

CERN/LHCC 2018-14

LHCb TDR 18

26 November 2018


Technical Design Report


EUROPEAN ORGANIZATION FOR NUCLEAR RESEARCH (CERN)

CERN-LHCC-2018-014
LHCB-TDR-018

LHCb-PUB-2018-012
26 November 2018

LHCb Upgrade Computing Model

Technical Design Report

The LHCb collaboration

© 2018 CERN for the benefit of the LHCb collaboration. CC-BY-4.0 licence.


LHCb Collaboration

I. Bediaga, M. Cruz Torres, J.M. De Miranda, A.C. dos Reis, A. Gomesa, A. Massafferri, R. Santana,l. Soares Lavra, R. Tourinho Jadallah Aoude1Centro Brasileiro de Pesquisas Fısicas (CBPF), Rio de Janeiro, Brazil

S. Amato, K. Carvalho Akiba, F. Da Cunha Marinho, L. De Paula, F. Ferreira Rodrigues,M. Gandelman, J.H. Lopes, I. Nasteva, J.M. Otalora Goicochea, E. Polycarpo, M.S. Rangel,L. Silva de Oliveira, B. Souza De Paula2Universidade Federal do Rio de Janeiro (UFRJ), Rio de Janeiro, Brazil

C. Chen, Y. Gan, Y. Gao, C. Gu, F. Jiang, X. Liu, Y. Luo, Z. Ren, J. Sun, Z. Tang, M. Wang, A. Xu,Z. Xu, Z. Yang, L. Zhang, W.C. Zhangz, X. Zhu3Center for High Energy Physics, Tsinghua University, Beijing, China

N. Beliy, J. He, W. Huang, P.-R. Li, X. Lyu, W. Qian, J. Qin, M. Saur, M. Szymanski, D. Vieira,Q. Xu, Y. Zheng4University of Chinese Academy of Sciences, Beijing, China

Y. Li, J. Wang5Institute Of High Energy Physics (ihep), Beijing, China

M. Chefdeville, D. Decamp, Ph. Ghez, J.F. Marchand, M.-N. Minard, B. Pietrzyk, M. Reboud,S. T’Jampens, E. Tournefier, Z. Xu6Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, IN2P3-LAPP, Annecy, France

Z. Ajaltouni, E. Cogneras, O. Deschamps, G. Gazzoni, C. Hadjivasiliou, M. Kozeiha, R. Lefevre,J. Maratasw, S. Monteil, P. Perret, B. Quintana, V. Tisserand, M. Vernet7Clermont Universite, Universite Blaise Pascal, CNRS/IN2P3, LPC, Clermont-Ferrand, France

J. Arnau Romeu, E. Aslanides, J. Cogan, D. Gerstel, R. Le Gac, O. Leroy, G. Mancinelli, C. Meaux,A.B. Morris, A. Poluektov, J. Serrano, A. Tsaregorodtsev8Aix Marseille Univ, CNRS/IN2P3, CPPM, Marseille, France

Y. Amhis, V. Balagurab, G.C. Barrand, S. Barsuk, D. Chamont, J.A.B. Coelho, F. Desse, F. Fleuretb,H. Grasland, V. Lisovskyi, F. Machefert, C. Marin Benito, E. Mauriceb, P. Robbe, M.H. Schune,A. Stocchi, A. Usachov, M. Winn, G. Wormser9LAL, Univ. Paris-Sud, CNRS/IN2P3, Universite Paris-Saclay, Orsay, France

E. Ben-Haim, E. Bertholet, P. Billoir, M. Charles, L. Del Buono, G. Dujany, V.V. Gligorov,A. Hennequin, A. Mogini, F. Polci, R. Quagliani, F. Reiss, A. Robert, E.S. Sepulveda, D.Y. Tou,D. Vom Bruch10LPNHE, Sorbonne Universite, Paris Diderot Sorbonne Paris Cite, CNRS/IN2P3, Paris, France

A. Khalfa11Centre de Calcul de l’Institut National de Physique Nuclaire et de Physique des Particules, Villeurbanne, France

S. Beranek, M. Boubdir, S. Escher, A. Guth, J. Heuel, T. Kirn, C. Langenbruch, M. Materok,S. Nieswand, S. Schael, E. Smith, M. Whitehead, V. Zhukov38

12I. Physikalisches Institut, RWTH Aachen University, Aachen, Germany

J. Albrecht, A. Battig, M.S. Bieker, A. Birnkraut, M. Demmer, U. Eitschberger, R. Ekelhof, L. Funke,L. Gavardi, K. Heinicke, A. Heister, P. Ibis, P. Mackowiak, A. Modden, T. Mombacher, J. Muller,V. Muller, R. Niet, S. Reichert, M. Schellenberg, T. Schmelzer, A. Seuthe, B. Spaan, H. Stevens,T. Tekampe13Fakultat Physik, Technische Universitat Dortmund, Dortmund, Germany


H.-P. Dembinski, T. Klimkovich, M. Schmelling, M. Zavertyaevc

14Max-Planck-Institut fur Kernphysik (MPIK), Heidelberg, Germany

S. Bachmann, D. Berninghoff, M. Borsato, S. Braun, A. Comerma-Montells, P. d’Argent, M. Dziewiecki,D. Gerick, J.P. Grabowski, X. Han, S. Hansmann-Menzemer, J. Hu, M. Kecke, M. Kolpin, R. Kopecna,B. Leverington, H. Malygina, J. Marks, D.S. Mitzel, S. Neubert, A. Piucci, N. Skidmore, M. Stahl,S. Stemmle, U. Uwer, A. Zhelezov15Physikalisches Institut, Ruprecht-Karls-Universitat Heidelberg, Heidelberg, Germany

R. McNulty, N.V. Raab16School of Physics, University College Dublin, Dublin, Ireland

M. De Seriod, R.A. Fini, A. Palano, A. Pastore, S. Simoned17INFN Sezione di Bari, Bari, Italy

F. Betti46, L. Capriotti, A. Carbonee, F. Cindolo, A. Falabella, F. Ferrari, D. Gallie, U. Marconi,D.P. O’Hanlon, C. Patrignanie, M. Soares, V. Vagnoni, G. Valenti, S. Zucchelli18INFN Sezione di Bologna, Bologna, Italy

M. Andreotti, W. Baldini, C. Bozzi46, R. Calabreseg, M. Corvog, M. Fiorinig, E. Luppig, L. Minzonig,L.L. Pappalardog, B.G. Siddi, I. Skiba, G. Tellarini, L. Tomassettig, S. Vecchi19INFN Sezione di Ferrara, Ferrara, Italy

L. An, L. Anderlini, A. Bizzetiu, G. Graziani, G. Passaleva46, M. Veltrir20INFN Sezione di Firenze, Firenze, Italy

P. Albicocco, G. Bencivenni, P. Campana, P. Ciambrone, P. De Simone, P. Di Nezza, S. Klaver,G. Lanfranchi, G. Morello, M. Palutan, M. Poli Lener, M. Rotondo, M. Santimaria46, A. Sartik,F. Sborzacchi, B. Sciascia21INFN Laboratori Nazionali di Frascati, Frascati, Italy

M. Bartolini, R. Cardinale, G. Cavallero, F. Fontanellih, A. Petrolinih22INFN Sezione di Genova, Genova, Italy

N. Bellolii, M. Calvii, P. Carnitii, D. Fazzini46,i, C. Gottii, C. Matteuzzi23INFN Sezione di Milano-Bicocca, Milano, Italy

J. Fuq, P. Gandini, D. Marangottoq, A. Merliq, N. Neriq, M. Petruzzoq, E. Spadaro Norellaq24INFN Sezione di Milano, Milano, Italy

B. Audurier, S. Belin, D. Brundu46, A. Bursche, S. Cadeddu, A. Cardini, S. Chen, A. Contu, F. Dordei,P. Griffith, A. Lai, A. Loi, G. Mancaf , R. Oldemanf , B. Saittaf25INFN Sezione di Cagliari, Monserrato, Italy

S. Amerio, A. Bertolin, S. Gallorini, A. Gianelle, D. Lucchesio, A. Lupato, E. Michielin, M. Morandin,L. Sestini, G. Simio26INFN Sezione di Padova, Padova, Italy

F. Bedeschi, R. Cencip, A. Lusiani, M.J. Morellot, T. Pajerot, G. Punzip, M. Rama, S. Stracka,D. Tonelli, G. Tucip, J. Walsh27INFN Sezione di Pisa, Pisa, Italy

G. Carboni, E. Santovettij , A. Satta28INFN Sezione di Roma Tor Vergata, Roma, Italy

V. Bocci, G. Martellotti, G. Penso, D. Pinci, R. Santacesaria, C. Satrianos, A. Sciubbak29INFN Sezione di Roma La Sapienza, Roma, Italy


R. Aaij, F. Archilli, L.J. Bel, S. Benson, E. Dall’Occo, J.A. de Vries, L. Dufour, S. Esen, E. Govorkova,R. Greim, W. Hulsbergen, D. Hynds, E. Jans, P. Koppenburg, I. Kostiuk50, M. Merk, M. Mulder,A. Pellegrino, C. Sanchez Gras, J. Templon, N. Tuning46, M. van Beuzekom, J. van Tilburg,M. van Veghel, C. Vazquez Sierra, M. Veronesi, A. Vitkovskiy30Nikhef National Institute for Subatomic Physics, Amsterdam, Netherlands

T. Ketel, G. Raven31Nikhef National Institute for Subatomic Physics and VU University Amsterdam, Amsterdam, Netherlands

J. Bhom, J. Brodzicka, A. Dziurda, W. Kucewiczl, M. Kucharczyk, T. Lesiak, A. Ossowska, M. Pikies,M. Witek, M. Zdybal32Henryk Niewodniczanski Institute of Nuclear Physics Polish Academy of Sciences, Krakow, Poland

A. Dendek, M. Firlej, T. Fiutowski, M. Idzik, W. Krupa, M.W. Majewski, J. Moron,A. Oblakowska-Mucha, B. Rachwal, K. Swientek, T. Szumlak, M. Tobin33AGH - University of Science and Technology, Faculty of Physics and Applied Computer Science, Krakow,

Poland

V. Batozskaya, H.K. Giemza, K. Klimaszewski, W. Krzemien, D. Melnychuk, A. Szabelski, A. Ukleja,W. Wislicki34National Center for Nuclear Research (NCBJ), Warsaw, Poland

L. Cojocariu, A. Ene, L. Giubega, A. Grecu, T. Ivanoaica, F. Maciuc, V. Placinta, M. Straticiuc35Horia Hulubei National Institute of Physics and Nuclear Engineering, Bucharest-Magurele, Romania

G. Alkhazov, N. Bondar, A. Chubykin, A. Dzyuba, A. Inglessi, K. Ivshin, S. Kotriakhova, O. Maev46,D. Maisuzenko, N. Sagidova, Y. Shcheglov,†, M. Stepanova, A. Vorobyev, N. Voropaev36Petersburg Nuclear Physics Institute (PNPI), Gatchina, Russia

I. Belyaev, A. Danilina, V. Egorychev, D. Golubkov, T. Kvaratskheliya46, T. Ovsiannikova, D. Pereima,D. Savrina38, A. Semennikov37Institute of Theoretical and Experimental Physics (ITEP), Moscow, Russia

A. Berezhnoy, I.V. Gorelov, A. Leflat, N. Nikitin, V. Volkov38Institute of Nuclear Physics, Moscow State University (SINP MSU), Moscow, Russia

S. Filippov, E. Gushchin, L. Kravchuk39Institute for Nuclear Research of the Russian Academy of Sciences (INR RAS), Moscow, Russia

K. Arzymatov, A. Baranov, M. Borisyak, V. Chekalina, E. Khairullin, F. Ratnikov76, A. Ustyuzhanin76

40Yandex School of Data Analysis, Moscow, Russia

A. Bondarx, S. Eidelmanx, P. Krokovnyx, V. Kudryavtsevx, T. Maltsevx, L. Shekhtmanx, V. Vorobyevx

41Budker Institute of Nuclear Physics (SB RAS), Novosibirsk, Russia

A. Artamonov, K. Belous, R. Dzhelyadin,†, Yu. Guz46, A. Inyakin, V. Obraztsov, A. Popov,S. Poslavskii, V. Romanovskiy, M. Shapkin, O. Stenyakin, O. Yushchenko42Institute for High Energy Physics (IHEP), Protvino, Russia

A. Alfonso Albero, M. Calvo Gomezm, A. Cambonim, J. Casals Hernandez, S. Coquereau, L. Garrido,D. Gascon, P. Gironella Gironell, R. Graciani Diaz, E. Grauges, X. Vilasis-Cardonam43ICCUB, Universitat de Barcelona, Barcelona, Spain

B. Adeva, A.A. Alves Jr, O. Boente Garcia, V. Chobanova, X. Cid Vidal, J. Dalsenov, A. Dosil Suarez,A. Fernandez Prieto, A. Gallas Torreira, B. Garcia Plana, M. Lucio Martinez, D. Martinez Santos,M. Plo Casasus, J. Prisciandaro, C. Prouve, M. Ramos Pernas, A. Romero Vidal, J.J. Saborido Silva,B. Sanmartin Sedes, C. Santamarina Rios, P. Vazquez Regueiro, M. Vieites Diaz44Instituto Galego de Fısica de Altas Enerxıas (IGFAE), Universidade de Santiago de Compostela, Santiago de

Compostela, Spain


P. Kardos45Uppsala universitet, Uppsala, Sweden

F. Alessio, M.P. Blago, M. Brodski, J. Buytaert, W. Byczynski, D.H. Campora Perez, M. Cattaneo,Ph. Charpentier, S.-G. Chitic, M. Chrzaszcz, G. Ciezarek, M. Clemencic, J. Closier, V. Coco, P. Collins,T. Colombo, G. Coombs, G. Corti, B. Couturier, C. D’Ambrosio, O. De Aguiar Francisco, K. De Bruyn,A. Di Canto, H. Dijkstra, M. Dorigoy, P. Durante, C. Farber, M. Feo, P. Fernandez Declara,M. Ferro-Luzzi, M. Fontana, R. Forty, M. Frank, C. Frei, W. Funk, C. Gaspar, L.A. Granado Cardoso,L. Gruber, T. Gys, C. Haen, M. Hadji, C. Hasse, M. Hatch, B. Hegner, R. Jacobsson, D. Johnson,C. Joram, B. Jost, M. Karacson, B. Khanji, D. Lacarrere, F. Lemaitre, R. Lindner, O. Lupton,B. Malecki, M. Martinelli, R. Matev, Z. Mathe, D. Muller, N. Neufeld, N.S. Nolte, A. Pearce,M. Pepe Altarelli, S. Perazzini, J. Pinzino, F. Pisani, S. Ponce, L.P. Promberger, M. Ravonel Salzgeber,M. Roehrken, S. Roiser, T. Ruf, H. Schindler, B. Schmidt, A. Schopper, R. Schwemmer, P. Seyfert,F. Stagni, S. Stahl, F. Teubert, E. Thomas, S. Tolk, A. Valassi, S. Valat, E. van Herwijnen,R. Vazquez Gomez, J.V. Viana Barbosa, B. Voneki, K. Wyllie, Y. Zhang46European Organization for Nuclear Research (CERN), Geneva, Switzerland

G. Andreassi, V. Battista, A. Bay, V. Bellee, F. Blanc, M. De Cian, L. Ferreira Lopes, C. Fitzpatrick,O.G. Girard, G. Haefeli, P.H. Hopchev, C. Khurewathanakul, V.S. Kirsebom, A.K. Kuonen, V. Macko,M. Marinangeli, P. Marino, B. Maurin, T. Nakada, T. Nanut, T.D. Nguyen, C. Nguyen-Maun, P.R. Pais,L. Pescatore, G. Pietrzyk, F. Redi, A.B. Rodrigues, O. Schneider, M. Schubiger, S. Schulte, P. Stefko,M.E. Stramaglia, M.T. Tran47Institute of Physics, Ecole Polytechnique Federale de Lausanne (EPFL), Lausanne, Switzerland

C. Abellan Beteta, M. Atzeni, R. Bernet, C. Betancourt, Ia. Bezshyiko, A. Buonaura, J. Garcıa Pardinas,E. Graverini, D. Lancierini, F. Lionetto, A. Mauri, K. Muller, P. Owen, A. Puig Navarro, N. Serra,R. Silva Coutinho, O. Steinkamp, B. Storaci, U. Straumann, A. Vollhardt, Z. Wang, A. Weiden48Physik-Institut, Universitat Zurich, Zurich, Switzerland

A. Dovbnya, S. Kandybei49NSC Kharkiv Institute of Physics and Technology (NSC KIPT), Kharkiv, Ukraine

S. Koliiev, V. Pugatch50Institute for Nuclear Research of the National Academy of Sciences (KINR), Kyiv, Ukraine

S. Bifani, R. Calladine, G. Chatzikonstantinidis, N. Farley, P. Ilten, C. Lazzeroni, J. Plews, D. Popov14,A. Sergi, M.W. Slater, N.K. Watson, T. Williams, K.A. Zarebski51University of Birmingham, Birmingham, United Kingdom

M. Adinolfi, S. Bhasin, E. Buchanan, M.G. Chapman, J.M. Kariuki, S. Maddrell-Mander, P. Naik,K. Petridis, G.J. Pomery, E. Price, J.H. Rademacker, S. Richards, J.J. Velthuis52H.H. Wills Physics Laboratory, University of Bristol, Bristol, United Kingdom

M.O. Bettler, H.V. Cliff, B. Delaney, J. Garra Tico, V. Gibson, S.C. Haines, C.R. Jones, F. Keizer,M. Kenzie, G.H. Lovell, J.G. Smeaton, A. Trisovic, A. Tully, M. Vitti, D.R. Ward, I. Williams,S.A. Wotton53Cavendish Laboratory, University of Cambridge, Cambridge, United Kingdom

J.J. Back, T. Blake, A. Brossa Gonzalo, C.M. Costa Sobral, A. Crocombe, T. Gershon, M. Kreps,T. Latham, D. Loh, A. Mathad, E. Millard, M. Vesterinen54Department of Physics, University of Warwick, Coventry, United Kingdom

S. Easo, R. Nandakumar, A. Papanestis, S. Ricciardi, F.F. Wilson55STFC Rutherford Appleton Laboratory, Didcot, United Kingdom

P.E.L. Clarke, G.A. Cowan, R. Currie, S. Eisenhardt, E. Gabriel, S. Gambetta, K. Gizdov, F. Muheim,M. Needham, M. Pappagallo, S. Petrucci, S. Playfer, I.T. Smith, J.B. Zonneveld56School of Physics and Astronomy, University of Edinburgh, Edinburgh, United Kingdom


M. Alexander, J. Beddow, D. Bobulska, C.T. Dean, L. Douglas, L. Eklund, S. Karodia, I. Longstaff,M. Schiller, F.J.P. Soler, P. Spradlin, M. Traill57School of Physics and Astronomy, University of Glasgow, Glasgow, United Kingdom

T.J.V. Bowcock, G. Casse, F. Dettori, K. Dreimanis, S. Farry, V. Franco Lima, T. Harrison,K. Hennessy, D. Hutchcroft, P.J. Marshall, J.V. Mead, K. Rinnert, T. Shears, H.M. Wark, L.E. Yeomans58Oliver Lodge Laboratory, University of Liverpool, Liverpool, United Kingdom

P. Alvarez Cartelle, S. Baker, U. Egede, A. Golutvin75, M. Hecker, T. Humair, F. Kress, M. McCann46,R.D. Moise, R. Newcombe, M. Patel, M. Smith, S. Stefkova, M.J. Tilley, D. Websdale59Imperial College London, London, United Kingdom

R.J. Barlow, W. Barter, S. Borghi46, C. Burr, A. Davis, S. De Capua, D. Dutta, E. Gersabeck,M. Gersabeck, L. Grillo, R. Hidalgo Charman, M. Hilton, G. Lafferty, K. Maguire, A. McNab,D. Murray, C. Parkes46, G. Sarpis, M.R.J. Williams60School of Physics and Astronomy, University of Manchester, Manchester, United Kingdom

M. Bjørn, B.R. Gruberg Cazon, T. Hadavizadeh, T.H. Hancock, N. Harnew, D. Hill, J. Jalocha,M. John, N. Jurik, S. Malde, C.H. Murphy, A. Nandi, M. Pili, H. Pullen, V. Renaudin, A. Rollings,G. Veneziano, G. Wilkinson61Department of Physics, University of Oxford, Oxford, United Kingdom

T. Boettcher, D.C. Craik, C. Weisser, M. Williams62Massachusetts Institute of Technology, Cambridge, MA, United States

S. Akar, T. Evans, Z.C. Huard, B. Meadows, E. Rodrigues, H.F. Schreiner, M.D. Sokoloff63University of Cincinnati, Cincinnati, OH, United States

J.E. Andrews, B. Hamilton, A. Jawahery, W. Parker, Y. Sun, Z. Yang64University of Maryland, College Park, MD, United States

M. Artuso, B. Batsukh, A. Beiter, S. Blusk, S. Ely, M. Kelsey, K.E. Kim, Z. Li, X. Liang, R. Mountain,I. Polyakov, M.S. Rudolph, T. Skwarnicki, S. Stone, A. Venkateswaran, M. Wilkinson, Y. Yao, X. Yuan65Syracuse University, Syracuse, NY, United States

A. Hicheur66Laboratory of Mathematical and Subatomic Physics , Constantine, Algeria, associated to 2

C. Gobel, V. Salustino Guimaraes67Pontifıcia Universidade Catolica do Rio de Janeiro (PUC-Rio), Rio de Janeiro, Brazil, associated to 2

G. Liu68South China Normal University, Guangzhou, China, associated to 3

H. Cai, L. Sun69School of Physics and Technology, Wuhan University, Wuhan, China, associated to 3

B. Dey, W. Hu, M. Mukherjee, Y. Wang, D. Xiao, Y. Xie, M. Xu, H. Yin, J. Yuaa, D. Zhang70Institute of Particle Physics, Central China Normal University, Wuhan, Hubei, China, associated to 3

D.A. Milanes, I.A. Monroy, J.A. Rodriguez Lopez71Departamento de Fisica , Universidad Nacional de Colombia, Bogota, Colombia, associated to 10

O. Grunberg, M. Heß, N. Meinert, H. Viemann, R. Waldi72Institut fur Physik, Universitat Rostock, Rostock, Germany, associated to 15

C.J.G. Onderwater73Van Swinderen Institute, University of Groningen, Groningen, Netherlands, associated to 30


T. Likhomanenko, A. Malinin, O. Morgunova, A. Nogay, A. Petrov, A. Rogovskiy, V. Shevchenko74National Research Centre Kurchatov Institute, Moscow, Russia, associated to 37

F. Baryshnikov, S. Didenko, N. Polukhinac, E. Shmanin75National University of Science and Technology “MISIS”, Moscow, Russia, associated to 37

D. Derkach, M. Hushchyn, N. Kazeev76National Research University Higher School of Economics, Moscow, Russia, associated to 40

G. Panshin, S. Strokov, A. Vagner77National Research Tomsk Polytechnic University, Tomsk, Russia, associated to 37

L.M. Garcia Martin, L. Henry, B.K. Jashal, F. Martinez Vidal, A. Oyanguren, C. Remon Alepuz,J. Ruiz Vidal, C. Sanchez Mayordomo78Instituto de Fisica Corpuscular, Centro Mixto Universidad de Valencia - CSIC, Valencia, Spain, associated to 43

C.A. Aidala, W. Dean, J.D. Roth79University of Michigan, Ann Arbor, United States, associated to 65

C.L. Da Silva, J.M. Durham80Los Alamos National Laboratory (LANL), Los Alamos, United States, associated to 65

a Universidade Federal do Triangulo Mineiro (UFTM), Uberaba-MG, Brazil
b Laboratoire Leprince-Ringuet, Palaiseau, France
c P.N. Lebedev Physical Institute, Russian Academy of Science (LPI RAS), Moscow, Russia
d Universita di Bari, Bari, Italy
e Universita di Bologna, Bologna, Italy
f Universita di Cagliari, Cagliari, Italy
g Universita di Ferrara, Ferrara, Italy
h Universita di Genova, Genova, Italy
i Universita di Milano Bicocca, Milano, Italy
j Universita di Roma Tor Vergata, Roma, Italy
k Universita di Roma La Sapienza, Roma, Italy
l AGH - University of Science and Technology, Faculty of Computer Science, Electronics and Telecommunications, Krakow, Poland
m LIFAELS, La Salle, Universitat Ramon Llull, Barcelona, Spain
n Hanoi University of Science, Hanoi, Vietnam
o Universita di Padova, Padova, Italy
p Universita di Pisa, Pisa, Italy
q Universita degli Studi di Milano, Milano, Italy
r Universita di Urbino, Urbino, Italy
s Universita della Basilicata, Potenza, Italy
t Scuola Normale Superiore, Pisa, Italy
u Universita di Modena e Reggio Emilia, Modena, Italy
v H.H. Wills Physics Laboratory, University of Bristol, Bristol, United Kingdom
w MSU - Iligan Institute of Technology (MSU-IIT), Iligan, Philippines
x Novosibirsk State University, Novosibirsk, Russia
y Sezione INFN di Trieste, Trieste, Italy
z School of Physics and Information Technology, Shaanxi Normal University (SNNU), Xi’an, China
aa Physics and Micro Electronic College, Hunan University, Changsha City, China

†Deceased


Acknowledgments

The LHCb collaboration is greatly indebted to Philippe Charpentier for the invaluable help given to the preparation of this document. He is and will be a pillar of the success of LHCb.


Contents

1 Introduction and scope

2 Historical evolution of the computing model during LHC Run 1 and Run 2

3 LHCb Upgrade data processing flow
  3.1 The Turbo persistence model
  3.2 File formats and event sizes
  3.3 Offline data processing
  3.4 User analysis
  3.5 Simulation

4 Evolution of the trigger strategy
  4.1 Extrapolation of Run 2 trigger output to the Upgrade
  4.2 Output bandwidth scenarios
  4.3 Risk considerations concerning output bandwidth scenarios
  4.4 Analysis case studies
      4.4.1 Charm CP violation
      4.4.2 Inclusive beauty
      4.4.3 Excited charm spectroscopy

5 Resource provisioning
  5.1 Computing infrastructure
  5.2 Pledged computing resources via WLCG
  5.3 Non-pledged computing resources
      5.3.1 Online farm
      5.3.2 Other opportunistic resources

6 Resource requirements
  6.1 Common assumptions for the resource requirement model
      6.1.1 Luminosity
      6.1.2 Monte Carlo simulation
      6.1.3 Offline data processing
      6.1.4 Data set replicas
      6.1.5 Evolution of present resources
      6.1.6 Other common assumptions and units
  6.2 Baseline resource requirements
      6.2.1 Storage resource requirements
      6.2.2 CPU resource requirements
      6.2.3 Risk analysis
  6.3 Optional mitigation strategies
      6.3.1 Data parking
      6.3.2 Reduced HLT output bandwidth
      6.3.3 Aggressive fast simulation development
      6.3.4 Mitigation discussion

7 Conclusion


Chapter 1

Introduction and scope

The LHCb Upgrade detector [1] will operate through LHC Runs 3 and 4. The present LHCb detector will be partially dismantled during LHC Long Shutdown 2 (LS2) and a new detector will be installed, with more than 90% of the active detector channels replaced. The LHCb Upgrade is driven by a new paradigm for the trigger selection process yielding a much higher efficiency [2], which implies a necessary change in the offline computing model. The engineering aspects of both the new core software framework and the distributed computing infrastructure have been discussed in the Upgrade Software and Computing TDR [3]. Owing to the five-times higher instantaneous luminosity and higher foreseen trigger efficiency, the LHCb Upgrade will have a signal yield per unit time approximately ten times higher than that of the current experiment. The pileup will also increase, resulting in an average event size larger by a factor of three. As a consequence, an increase in data volume by more than a factor of thirty is expected from an extrapolation of the current experiment output rates.
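As a back-of-the-envelope check of this extrapolation, the short sketch below simply multiplies the factors quoted above; it is purely illustrative and not part of any LHCb software.

    # Illustrative arithmetic using only the factors quoted in this chapter.
    luminosity_factor = 5    # five-times higher instantaneous luminosity
    trigger_factor    = 2    # higher trigger efficiency from the new selection paradigm
    event_size_factor = 3    # average event size growth due to pileup

    signal_yield_factor = luminosity_factor * trigger_factor        # ~10x signal yield per unit time
    data_volume_factor  = signal_yield_factor * event_size_factor   # ~30x data volume

    print(signal_yield_factor, data_volume_factor)   # 10 30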

Nevertheless, the computing resource requirements are substantially mitigated by the novel real-time data processing model that is being introduced and by the massive use of fast simulation techniques. Significant elements of these new strategies have already been tested, albeit at a reduced scale, during Run 2.

Given the unavoidably larger output data volume expected after the LHCb Upgrade, storage resource requirements have the highest priority to ensure full exploitation of the investment put into the upgraded experiment in terms of physics yield.

This document presents the changes in the computing model and the associated offline computing resources needed for the LHCb Upgrade, which are defined by two main factors: a significantly increased trigger output rate compared to the current experiment; and the corresponding necessity to generate significantly larger samples of simulated events. The update of the LHCb computing model for Run 3 and beyond is discussed, with an emphasis on the optimisation that has been applied to the usage of distributed CPU and storage resources. In this respect the present Technical Design Report represents the necessary complement to the Upgrade Software and Computing TDR.

A historical overview of the evolution of the LHCb computing model during LHC Runs 1 and 2 is given in Chap. 2, where the main novel features impacting the resource needs are described and motivated. Chapter 3 presents the logical workflow for the centralised production of real and simulated data sets, and for physics analysis by members of the Collaboration. This represents the computing model that will be adopted for the LHCb Upgrade. The physics motivations that underpin the choice of the trigger output bandwidth for the LHCb Upgrade are summarised in Chap. 4. The infrastructural resources needed for data processing are described in Chap. 5. Finally, the resource requirements are discussed in Chap. 6. A baseline resource requirement scenario is presented, followed by illustrative examples of optional alternative strategies that can potentially be adopted. The impact on the physics programme and the risks and overheads for the computing operations implied by these alternative options are also discussed. The chapter includes a detailed time evolution of each type of resource (CPU, disk, tape) during Run 3 and LS3 (2021–2025).

It is assumed that the same model will also be extended to Run 4. However, enough flexibility is foreseen in the model so that LHCb will be able to profit from all the technology improvements and developments that are being considered for the High-Luminosity LHC (HL-LHC) phase.


Chapter 2

Historical evolution of the computing model during LHC Run 1 and Run 2

The LHCb computing model for Run 1, defined in the LHCb Computing TDR [4], followed a variant of the MONARC scheme [5], where the computing and storage infrastructure are organised into tier levels providing dedicated resources for different aspects of the computing needs of the experiment. The LHCb tiers at the time of Run 1 were the Tier 0 at CERN, six Tier 1 centres at INFN-T1 (Italy), FZK-LCG2 (Germany), IN2P3-CC (France), NL-T1 (The Netherlands), PIC (Spain) and RAL-LCG2 (UK), and a further 14 Tier 2 centres. Both tape and disk storage were concentrated at the Tier 0 and Tier 1 sites, where the tape was used for storing raw and reconstructed data, as well as for archiving derived data sets. The disk storage was divided into separate spaces for users and production data, where the latter included real and simulated data, and buffer storage for data processing activities. Tier 0 and Tier 1 centres were also primarily used for data processing and user analysis and, only in the presence of free resources, also for Monte Carlo simulation. Tier 2 centres were used for the production of simulated events and, to a lesser extent, for user analysis without input data.

During Run 2, the strict relation between tier levels and different computing activities was relaxed [6]. “Mesh processing” [7] made it possible to include computing centres other than Tier 0 and Tier 1 in the data processing activities. In this model, input data can be downloaded from a remote storage site and processed locally. The output can then be uploaded back to the origin storage site (see also Sect. 5.2). The number of Tier 1 centres was also increased with the inclusion of RRCKI-T1 (Russia). In addition, the number of sites with disk was increased, with a dozen Tier 2 sites converted into Tier2-D sites¹ which provided additional disk space for real and simulated data used by analysts.
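Schematically, a mesh-processing job reduces to a download/process/upload cycle against the storage site that holds the input. The sketch below is only a conceptual outline; the helper functions, file names and site names are hypothetical stand-ins, not calls to the actual LHCb data-management software.

    # Schematic mesh-processing job: input is pulled from a remote storage site,
    # processed on the local CPU, and the output is uploaded back to the origin site.
    def download(lfn, source):
        print(f"download {lfn} from {source}")
        return "/tmp/" + lfn.split("/")[-1]

    def upload(path, destination):
        print(f"upload {path} to {destination}")

    def run_mesh_job(input_lfn, origin_storage, process):
        local_input = download(input_lfn, origin_storage)   # input fetched from the remote site
        local_output = process(local_input)                  # processing runs on the local CPU
        upload(local_output, origin_storage)                 # output returned to the origin site

    # Hypothetical example invocation.
    run_mesh_job("/lhcb/data/2016/some.raw", "ORIGIN-SE", lambda f: f + ".dst")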

A major change, with direct consequences on Run 2 offline processing, was the introduction of real-time alignment and calibration, at its highest quality, already at the high-level trigger stage [8]. As of Run 2, the high-level trigger was indeed split into two stages. The first stage, HLT1, receives events at a rate of up to 1 MHz from the hardware trigger (L0) and selects them further, performing a first full track reconstruction. The intermediate data are buffered on disk and used to obtain high-quality calibration and alignments. The temporarily stored data are then processed asynchronously by the second stage of the trigger, HLT2, using the best alignment and calibration information. Since calibration constants are already applied at trigger level, the offline reconstruction step needed for calibration could be skipped, saving CPU resources.

¹The term Tier2-D refers to extended-scope Tier 2 sites equipped with enough storage to hold data (hence “-D”) and allow in-situ data analysis.


Consequently, any offline data reprocessing in addition to reconstruction is limited to data filtering, which is known in LHCb as stripping. In the stripping process, many selection algorithms (stripping lines) are executed, in order to classify the events for an easier subsequent use at the analysis stage.
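Conceptually, the stripping is a set of independent selections (stripping lines), each attached to an output stream, and an event is written to every stream for which at least one of its lines fires. The sketch below only illustrates this classification logic; the line names and selection criteria are invented placeholders, not actual LHCb stripping lines.

    # Conceptual sketch of stripping: each line is a (stream, predicate) pair.
    STRIPPING_LINES = [
        ("Charm",        lambda evt: evt.get("has_charm_candidate", False)),
        ("Dimuon",       lambda evt: evt.get("n_muons", 0) >= 2),
        ("Semileptonic", lambda evt: evt.get("has_b_candidate", False) and evt.get("n_muons", 0) >= 1),
    ]

    def strip(event):
        """Streams to which this event is written (those whose lines fire)."""
        return [stream for stream, selects in STRIPPING_LINES if selects(event)]

    print(strip({"has_charm_candidate": True}))             # ['Charm']
    print(strip({"has_b_candidate": True, "n_muons": 2}))   # ['Dimuon', 'Semileptonic']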

Profiting from the availability of real-time calibration and alignments, another substantial step forward was the introduction of the Turbo stream [9], in addition to the FULL stream. In the FULL stream, which represents the traditional particle physics model to persist raw data, all the zero-suppressed raw detector data are saved for further offline processing. In the newly introduced Turbo stream, instead, the raw data are discarded, and only a set of higher-level reconstructed physics objects necessary for a given physics analysis are persisted, along with monitoring information. Data in the Turbo stream are fully reconstructed at the high-level trigger stage. Depending on the specific trigger selection, information ranging from a reduced number of physics objects (e.g. primary and secondary vertices and decay tracks) to the full event is persisted (selective persistence). The events are saved in a compressed format optimised for writing speed (MDF format) that is used for all online streams. Offline processing of Turbo events requires very limited CPU resources, at the permille level of the total, as it implies only the conversion from MDF to Root [10] format and the inclusion of luminosity information. A dedicated calibration stream, TurCal [11], was also introduced to store the calibration samples used to study the performance of tracking, particle identification and photon reconstruction, to correct the simulated samples, to monitor the detector performance during the data taking, and to assess systematic effects. In that stream, after the online full event reconstruction, the information related to the selected calibration candidates is stored together with the raw detector data.

The main logical processing chains during the Run 1 and Run 2 periods are the simulation workflow and the classical data processing for real data, with the Turbo data processing workflow added in Run 2 (see Fig. 2.1). The offline data processing of the FULL stream proceeds in two steps. In the first step, the event reconstruction is performed, converting the raw data into physics objects. In the subsequent step, the stripping, data are further selected and classified in streams², and the events are saved in reduced formats. In Run 2, HLT2 being able to produce high-quality aligned and calibrated data, the reconstructed physics objects were also available directly after the HLT processing via the Turbo stream. The amount of data processed through the Turbo stream increased during the Run 2 data taking years, so that they were separated

The simulation workflow includes an event generation step and the simulation of the interactions of particles with the detector material based on Geant4 [12–14]. Both steps are handled by the LHCb simulation framework Gauss [15]. In a subsequent step the response of the detector, of its front-end electronics and the subsequent digitisation and hardware or firmware processing are simulated. The output of the detector front-end electronics, its subsequent hardware processing, and the hardware trigger and event building are described in Ref. [2].

Both the real and simulated events are processed, online and offline respectively, by the high-level trigger software, which performs the event reconstruction and selects events to be retained for offline analysis.³ The data produced after the HLT processing are exported from the LHCb pit to the CERN Tier 0 and stored in the mass storage system (CASTOR). A second copy is stored in the Tier 1 centres.

²In Run 1 and Run 2 the events are categorised in ten main streams, broadly corresponding to selected b-hadrons and c-hadrons, dimuonic, semileptonic, leptonic and radiative final states, electroweak physics and minimum-bias events. Additional calibration streams are also used.

³During Run 1 a first version of the detector alignment and calibration was applied offline during this step. The raw data from the detector were stored and usually reconstructed again offline, typically after the end of the data taking year, applying improved alignment and calibration constants.


Figure 2.1: Schematic view of real and simulated data processing flow during Run 2. The original diagram shows the Monte Carlo chain (event generation and detector response in the simulation framework, followed by digitisation of the front-end electronics) and the real-data chain (LHCb detector front-end electronics, event building and hardware trigger), both feeding the software trigger, which performs staged event selection, detector alignment and calibration, and partial/full event reconstruction. The FULL stream then undergoes full event reconstruction and stripping (data selection, streaming and event size reduction), while the Turbo stream undergoes event format conversion, addition of luminosity information and data streaming; both end in user analysis (event selection and preparation of the analysis data format).


Chapter 3

LHCb Upgrade data processing flow

The data processing flow (logical workflow) for the LHCb Upgrade in Run 3 and beyond is discussed in detail in Ref. [3]. For the sake of clarity, some of the more important concepts are summarised in this chapter.

Building on the experience developed during LHC Run 2, in the LHCb Upgrade most of the activities related to data processing, such as event reconstruction and calibration and alignment of sub-detectors, will be performed online. The output produced by the high-level trigger will be stored on tape through the three FULL, Turbo and calibration (TurCal) streams. The Turbo stream will undergo only minimal offline processing before being stored to disk. The FULL and TurCal streams will instead undergo the stripping process that will streamline the events and reduce their size.

3.1 The Turbo persistence model

The basic concepts of the Turbo persistence model are described in detail in Ref. [9] and depicted in Fig. 3.1.

In the Turbo persistence model, once a candidate decay is selected by HLT2, as a bare minimum only the objects involved in the trigger decision, plus all the primary vertices, are persisted in the Turbo stream. The event selection can be further customised (selective persistence) to save additional objects, such as other tracks coming from a primary vertex or objects contained in a cone around the candidate, up to the full event, possibly including some raw data banks. A special use case of the last option is represented by the calibration TurCal stream, where events selected for detector alignment and calibration are persisted. An event is stored in the FULL stream if at least one trigger selection requires persisting the full event. An extensive discussion of scenarios for bandwidth division between the Turbo and FULL streams is given in Chap. 4.
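The persistence levels sketched in Fig. 3.1 can be viewed as progressively larger sets of objects attached to the trigger decision. The snippet below is a conceptual illustration of that hierarchy only, not LHCb code; the class and field names are invented for the example.

    # Conceptual sketch of the Turbo selective-persistence levels (not LHCb code).
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class PersistedEvent:
        candidate: str                    # objects used in the HLT2 trigger decision
        primary_vertices: List[str]       # primary vertices are always kept
        extra_objects: List[str] = field(default_factory=list)   # selective persistence
        raw_banks: List[str] = field(default_factory=list)       # full-event option only

    # Minimal Turbo record: the candidate plus the primary vertices.
    minimal = PersistedEvent("D0 -> K- pi+", ["PV1", "PV2"])

    # Selective persistence: e.g. the soft pion from a D*+ -> D0 pi+ decay.
    selective = PersistedEvent("D0 -> K- pi+", ["PV1", "PV2"],
                               extra_objects=["pi+ (slow)"])

    # Full-event persistence, optionally including raw sub-detector banks (TurCal-like).
    full = PersistedEvent("D0 -> K- pi+", ["PV1", "PV2"],
                          extra_objects=["all reconstructed tracks"],
                          raw_banks=["VELO", "RICH", "ECAL"])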

3.2 File formats and event sizes

The Turbo and FULL streams will be saved in different file formats, allowing for further offline processing, as described in the following sections. The LHCb data formats are the following:

• RDST: used to store reconstructed complete events for the FULL and TurCal streams, with the possible addition of selected RAW banks that may be necessary in the subsequent processing steps. This format is persisted (tape only) and is used as input to the stripping stage.

6

Page 19: CERN/LHCC 2018-14 LHCb TDR 18 · 2018. 11. 27. · EUROPEAN ORGANIZATION FOR NUCLEAR RESEARCH (CERN) CERN-LHCC-2018-014 LHCB-TDR-018 LHCb-PUB-2018-012 26 November 2018 LHCb Upgrade

Figure 3.1: The Turbo persistence model. Top: a candidate D0 → K−π+ is selected by HLT2; as a minimum, only the candidate and the primary vertices (PV) are persisted. Middle: additional objects, e.g. soft pion tracks from candidate D∗+ → D0π+ decays, can be selectively persisted. Bottom: optionally, the full reconstructed event can also be persisted, including some raw subdetector data banks (e.g. VELO, RICH, ECAL).

• TurboRAW: used to store the reconstructed and selected candidates for the Turbo stream. The events are packed, serialised and persisted (tape only) in a raw format optimised for writing speed. This format is the input to the Tesla application [9] as described in Sect. 3.3.

• (M)DST: used to store the output of the stripping, in the Root I/O format. The complete reconstructed events are persisted in DST format (on both disk and tape archive), while for MDSTs (“micro-DST”) a selective persistence is applied. The MDST format is also the output of the Tesla application. In both cases, reconstructed physics objects (tracks, vertices, etc.) are persisted. These formats are accessed by users for further analysis.

• RAW: used to store the zero-suppressed output of the detector electronics, which is used as the input to the reconstruction; this format is transient for the Turbo stream and persistent (tape only) for the TurCal stream. It is envisaged to be optionally used also for the FULL stream during the detector commissioning phase.

The format of the various streams coming out of the online system is therefore:

• Turbo: TurboRAW

• FULL: RDST (plus, optionally, RAW, during the commissioning phase) (see Sect. 6.1.1)

• TurCal: RDST+RAW

The content of the TurboRAW and MDST files is defined depending on the trigger selections (trigger lines) as outlined in Sect. 3.1. The smallest data sizes are obtained when the stored information is only from the candidate particles that triggered the event, which are part of the physics signal that will be analysed offline. Alternatively, some physics cases may require additional information to be stored, such as tracks or calorimeter clusters in a cone around the signal candidates. In some other cases the complete reconstructed event will be required. The size of an event therefore depends on the amount of information that is stored, ranging from a few kB, when only signal candidates are saved, to tens of kB if extra information is added, to roughly 200–250 kB if the full event is persisted in the RDST or RAW formats.
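These event sizes translate directly into bandwidth once multiplied by a trigger-line rate. The sketch below encodes the approximate sizes quoted above and computes the throughput of a hypothetical 1 kHz line at each persistence level; both the rounded sizes and the rate are illustrative numbers, not design figures.

    # Approximate per-event sizes quoted in the text, by persistence level (kB).
    EVENT_SIZE_KB = {
        "candidates_only": 5,     # "a few kB"
        "with_extra_info": 30,    # "tens of kB"
        "full_event": 225,        # "roughly 200-250 kB"
    }

    def line_throughput_gb_s(rate_hz, persistence):
        """Throughput of a single trigger line (illustrative only)."""
        return rate_hz * EVENT_SIZE_KB[persistence] * 1e3 / 1e9

    # Hypothetical 1 kHz line written at the three persistence levels.
    for level in EVENT_SIZE_KB:
        print(level, round(line_throughput_gb_s(1000, level), 3), "GB/s")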

3.3 Offline data processing

The online reconstruction and trigger selection are performed by the Moore [16] application. This program executes of the order of a thousand trigger lines, each of which is associated with one of the three (Turbo, TurCal or FULL) streams that are subsequently stored offline. Events in the FULL and TurCal streams are processed offline by the DaVinci [17] application, which selects events by executing stripping lines (see Chap. 2) and classifies them in streams. Each of the stripping lines is associated with a specific analysis, or group of closely related analyses, and the output information can be persisted in either the DST or MDST format. In the latter case, the Turbo selective persistence model is also applied. For the Turbo stream, which will take the bulk of the events (see Chap. 4), the Tesla application converts the information from the raw format into Root I/O objects such as tracks, calorimeter clusters and particles, ready to be used for physics analysis, adds the luminosity information, and persists them in the MDST format. Turbo events are also classified into streams for easier access at the analysis stage.

In Run 2, LHCb provides the stripping output in about ten streams. In the LHCb Upgrade there will be an increase of a factor five in instantaneous luminosity and a further factor of two or more in trigger selection efficiency for a large fraction of the physics channels. Consequently, the number of streams needs to scale up by roughly one order of magnitude, in order to keep the size of the datasets to be processed in any analysis at a manageable level.¹

The logical workflow for the Upgrade offline data processing is described in Fig. 3.2. The Turbo, FULL and TurCal streams are exported from the pit. One copy of all data will be stored at the CERN tape system. One additional copy of the FULL and TurCal raw data will be stored at another Tier 1 tape system. All data are also copied to intermediate buffer disk storage. Data are then immediately processed by the appropriate stream-dependent applications, as previously explained, and saved on disk. The Turbo data will be simply reformatted and put onto disk storage, with a second copy on a Tier 1 tape system.

¹Currently, a typical stream amounts to 60 TB/year of data. A typical user processing time to analyse a stream is of the order of a few weeks.


Figure 3.2: The LHCb offline data processing workflow.

The data of the FULL and TurCal streams will be further reduced by a stripping step which reduces the event size and performs a further event selection before storing the events on disk. An average event retention of 70% is obtained in Run 2 and it is assumed to increase to 80% for the Upgrade. This stripping step is largely similar to the selections implemented in the Turbo stream and implements the Turbo persistence model, but has the advantage of allowing further re-processing. The first stripping pass happens synchronously with data taking and can be pre-scaled if needed. Two replicas of the data will be kept on disk after this first processing pass. A second processing pass (re-stripping, implementing updated selections) is typically performed after the data taking period, usually during the winter shutdown. When the second processing pass has been performed, the number of copies of the previous processing saved on disk can be reduced. One copy of each stripping pass is also kept on tape archive.
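As a simple illustration of how retention, event-size reduction and the replica policy combine, the sketch below estimates the disk volume occupied by one stripping pass. Only the 80% retention and the two disk replicas come from the text; the input volume and the event-size reduction factor are placeholder numbers.

    # Disk volume of one stripping pass (illustrative; input volume and size
    # reduction factor are placeholders, retention and replica count from the text).
    def stripping_disk_volume_pb(input_volume_pb, retention=0.80, size_reduction=0.3, replicas=2):
        return input_volume_pb * retention * size_reduction * replicas

    print(stripping_disk_volume_pb(10.0))   # e.g. 10 PB on tape -> 4.8 PB on disk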

3.4 User analysis

In the streaming scheme described above each user will typically run on 1–2% of the whole data set (see Chap. 4). In order to avoid bottlenecks due to each user chaotically running jobs on their favourite stream, the data processing for user analysis will be organised in centrally-managed productions (working group productions). Each production will roughly correspond to the analyses performed in one of the about ten LHCb physics analysis working groups, as discussed in Ref. [3]. The possibility to implement a train model, where several working group productions are chained together, is also envisaged. In addition, users will be allowed to submit jobs to offline resources, using the Ganga framework [18], for analysis prototyping and testing purposes, to run parametrised pseudoexperiment simulations, and to perform fits or other further stages of the analysis.

3.5 Simulation

The production of Monte Carlo events in the Upgrade is described in detail in Ref. [3]. The simulated events are produced by two applications, Gauss [19] and Boole [20], taking care of the event generation and propagation through the detector, and of the digitisation, respectively.


Figure 3.3: The LHCb event simulation flow.

In Run 2 LHCb has already introduced several Monte Carlo simulation techniques which are faster than the standard full Geant4-based simulation. The methods which are already in use are briefly discussed below. The ReDecay [21] simulation is a technique where the signal decay is simulated for each event but the same underlying event is reused multiple times. A factor of ten speedup is obtained with this technique. In the TrackerOnly method only the simulation of the tracking detectors is performed, with a speedup factor of around ten. Finally, with the RICHless method, the simulation of the RICH detectors is switched off and a speedup factor of two is obtained. Further fast simulation techniques are currently being developed, including the use of calorimeter shower libraries and a fully parametric simulation based on the Delphes package [22].
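The net CPU saving depends on how a simulated sample is shared among these techniques. The sketch below shows the arithmetic for a hypothetical mix; the speed-up factors are those quoted above, while the mix fractions are invented purely for illustration.

    # Illustrative CPU-cost estimate for a simulated sample produced with a mix of
    # techniques; speed-up factors from the text, mix fractions hypothetical.
    speedup = {"full": 1, "redecay": 10, "trackeronly": 10, "richless": 2}
    mix     = {"full": 0.4, "redecay": 0.4, "trackeronly": 0.1, "richless": 0.1}  # hypothetical

    relative_cost = sum(mix[k] / speedup[k] for k in mix)   # cost per event, full simulation = 1
    print(f"average cost per event: {relative_cost:.2f} of a fully simulated event")
    print(f"effective speed-up: {1 / relative_cost:.1f}x")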

The logical simulation workflow in the LHCb Upgrade will be very similar to the one used for Run 1 and Run 2 and is summarised in Fig. 3.3. A number of steps are run in sequence. The intermediate files created at the end of each step are transient and deleted when no longer necessary. The only notable exception to this workflow is the fully parametric simulation, where the data are saved directly in the form of high-level objects that are ready to be used in physics analysis. In this case, the digitisation, trigger emulation and event reconstruction steps are skipped.


Chapter 4

Evolution of the trigger strategy

As discussed in Ref. [3], many of the concepts that will be used in the Upgrade real-time analysis strategy have already been successfully exploited during Run 2. Therefore, the volume of data written out by the LHCb trigger during Run 2 represents an excellent starting point to estimate the overall computing resources which will be required by the collaboration. The evolution of the present trigger output bandwidth to the Upgrade conditions is described in Sect. 4.1. Different output bandwidth scenarios are described in Sect. 4.2 and their associated risks discussed in Sect. 4.3. The physics arguments that support the proposed trigger output bandwidth scenarios are further elucidated in Sect. 4.4.

4.1 Extrapolation of Run 2 trigger output to the Upgrade

The event sizes, event rates and throughput to tape of the three streams, FULL, Turbo and TurCal, during 2018 are given in Tab. 4.1.¹ The Turbo event size is the average measured during Run 2. The FULL stream dominates the output bandwidth to disk, but about 32% of the physics events (i.e. excluding the TurCal events) are already processed through the Turbo stream, where the event size is about 50% of that of the FULL stream events.

Table 4.1: Event size, event rate and throughput to tape for the FULL, Turbo and TurCal streams during 2018.

  stream   event size (kB)   event rate (kHz)   rate fraction   throughput (GB/s)   bandwidth fraction
  FULL           70                7.0               65%              0.49                 75%
  Turbo          35                3.1               29%              0.11                 17%
  TurCal         85                0.6                6%              0.05                  8%
  total          61               10.8              100%              0.65                100%
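The throughput column of Tab. 4.1 is simply the product of event rate and event size. The snippet below reproduces it from the quoted numbers, purely as a consistency check of the table.

    # Reproduce the Tab. 4.1 throughputs from event size (kB) and rate (kHz).
    streams_2018 = {"FULL": (70, 7.0), "Turbo": (35, 3.1), "TurCal": (85, 0.6)}

    for name, (size_kb, rate_khz) in streams_2018.items():
        throughput_gb_s = size_kb * 1e3 * rate_khz * 1e3 / 1e9
        print(f"{name}: {throughput_gb_s:.2f} GB/s")        # 0.49, 0.11, 0.05

    total = sum(s * 1e3 * r * 1e3 / 1e9 for s, r in streams_2018.values())
    print(f"total: {total:.2f} GB/s")                        # ~0.65 GB/s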

According to the studies reported in Refs. [2, 23], the LHCb trigger is already signal dominated. Therefore it can be safely assumed that 32% represents the fraction of the LHCb Run 2 physics programme relying on the Turbo stream. These figures are a good starting point from which to extrapolate to the Upgrade requirements, if the full breadth of the LHCb physics programme is to be maintained, and possibly expanded. This is even more important owing to the significant duration of LS3 and the competition from Belle 2: it is vital that the collaboration collects a sufficiently large and broad data set to keep producing world-leading results throughout the LS3 period and beyond.

¹Excluding low-multiplicity triggers, which are expected to be negligible in the Upgrade.


As described in Ref. [1], the LHCb Upgrade will run at an instantaneous luminosity which is a factor five larger than that of the present experiment. The removal of the first-level hardware trigger (L0 trigger) will increase by a factor two the trigger efficiency for most of the physics programme [1, 2].² An overall increase of a factor ten in the signal yield per unit time is therefore expected for the Upgrade. At the Upgrade luminosity, the pileup will increase roughly by a factor six. However, the collisions which produce a bb̄ or cc̄ pair contribute more to the event multiplicity than the average minimum-bias proton-proton collisions. Therefore the size of data in the FULL stream will be roughly a factor three larger due to the presence of extra pileup interactions, as estimated from LHCb Upgrade simulation taking into account: (i) that the size of data events is larger than that of simulated events, and (ii) that the Upgrade trigger is likely to select preferentially higher-occupancy events. Assuming that the Upgrade trigger will maintain the same signal purity despite the increased pileup, and that the Upgrade Turbo stream will instead not increase in size with increasing pileup, it is straightforward to conclude that the Run 2 selected data rates will scale in proportion to the signal yield, that an Upgrade Turbo event will be 16.7% of a FULL stream event, and that the extrapolated overall output bandwidth would be 17.5 GB/s in Upgrade conditions.³ This is shown in Fig. 4.1, where the output bandwidth is plotted as a function of the fraction of the LHCb physics programme which can be performed using Turbo. Without Turbo, maintaining the LHCb physics programme in the Upgrade would require an output bandwidth of almost 24 GB/s.

²The detailed gain depends on the specific decay modes: the factor two is a rough estimate based on the LHCb balance of hardware and software trigger efficiencies.

³The events in the TurCal stream are assumed to scale like those in the FULL stream.

Figure 4.1: HLT output bandwidth (in GB/s) as a function of the fractional rate of events recorded using the Turbo stream.
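The curve of Fig. 4.1 follows from the ingredients above: Run 2 rates scaled by a factor ten, FULL (and TurCal) events three times larger, and Turbo events kept at their Run 2 size. The sketch below encodes that extrapolation using the rounded values of Tab. 4.1; it is only meant to reproduce the quoted bandwidths approximately.

    # Extrapolated HLT output bandwidth as a function of the fraction of the
    # physics programme taken with Turbo (approximate reproduction of Fig. 4.1).
    RATE_SCALE = 10                                        # x10 signal yield per unit time
    PHYSICS_RATE_HZ = (7.0 + 3.1) * 1e3 * RATE_SCALE       # Run 2 FULL+Turbo physics rate, scaled
    TURCAL_RATE_HZ = 0.6e3 * RATE_SCALE
    FULL_SIZE_B, TURBO_SIZE_B, TURCAL_SIZE_B = 70e3 * 3, 35e3, 85e3 * 3   # bytes per event

    def bandwidth_gb_s(turbo_fraction):
        physics = PHYSICS_RATE_HZ * ((1 - turbo_fraction) * FULL_SIZE_B
                                     + turbo_fraction * TURBO_SIZE_B)
        return (physics + TURCAL_RATE_HZ * TURCAL_SIZE_B) / 1e9

    print(round(bandwidth_gb_s(0.32), 1))   # ~17 GB/s with the Run 2 Turbo fraction (17.5 GB/s quoted)
    print(round(bandwidth_gb_s(0.0), 1))    # ~23 GB/s without Turbo ("almost 24 GB/s" quoted)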

4.2 Output bandwidth scenarios

The trigger output bandwidth can be optimised on the basis of physics, operational and resource cost considerations. Such an optimisation is performed by varying the relative fraction of events processed in the Turbo and FULL streams, under the following assumptions. First of all, since the trigger performs the full event reconstruction, the data in the FULL stream can be saved in reconstructed format (RDST files) while the raw data can be discarded. In addition, a larger fraction of analyses will be migrated to the Turbo stream. The events saved in the Turbo stream are fully reconstructed, although, in this case, only a signal candidate and possibly the reconstructed particles associated with its pp collision vertex are saved. A sizeable fraction of FULL stream events is retained and stored in reconstructed format. This allows inclusive triggers to continue to be used and leaves room for new analyses after the data are collected, but eliminates the need for CPU for offline reconstruction.

Table 4.2: Output bandwidth scenarios. For each of them, the output data volume, the fraction of physics event rate recorded in the Turbo stream (excluding the TurCal stream) and the physics channels left in the FULL stream are given.

  Output data volume   Turbo physics fraction   Physics channels left in FULL
  10 GB/s              73%                      EW, high-pT, (semi)leptonic and some hadronic
                                                B physics, leptonic charm decays and general
                                                LFV searches
  7.5 GB/s             87%                      EW, high-pT, some leptonic B physics, some
                                                LFV searches and leptonic searches
  5 GB/s               99%                      None

Events selected in the TurCal stream represent a special case. LHCb is a precision physics experiment, therefore the size of the calibration sample is expected to increase at least proportionally to the signal yield in the Upgrade. For these events the full raw event data have to be saved, to allow an accurate understanding of the detector reconstruction and performance, and the development of new reconstruction and calibration techniques. This is confirmed by the experience of Run 1 and Run 2, where the calibration samples have continuously expanded in order to ensure an increasingly accurate understanding of the detector and reduce the systematic uncertainties in the analyses. The aggressive use of fast simulation techniques [2] (see Chap. 6) also requires substantial calibration samples to tune the simulation algorithms.

Within this model, two main parameters can be adjusted to control the average event size and, ultimately, the output data volume: moving FULL triggers to Turbo, and making Turbo events smaller. As shown in Fig. 4.1, there is a smooth evolution of the output data volume as a function of the fraction of the physics programme going to Turbo. Therefore three illustrative scenarios, corresponding to 10, 7.5, and 5 GB/s output data rates, are summarised in Tab. 4.2. In all cases it is assumed that an Upgrade Turbo stream event is 16.7% of the size of an Upgrade FULL event, an improvement of a factor three with respect to the Turbo/FULL ratio in the present LHCb. These scenarios imply that at least 60% of the trigger selections (or trigger lines) will have to be migrated to the Turbo stream. This assumes many improvements compared to today, in particular in the storing of additional event information (such as particle isolation and flavour tagging) associated with each candidate. In addition there are significant operational and technical challenges associated with this transition to Turbo. For example, for modes involving neutral particles, a detailed understanding of the calorimeter performance in Run 3 will be needed. However, these can be considered commissioning difficulties and do not drive the resources required for data processing in the steady state.

The scenarios summarised in Tab. 4.2 are chosen by extrapolating from the physics content of the Turbo and FULL streams in Run 2 [24]. It is assumed that all flavour physics channels not involving beauty physics will be migrated to the Turbo stream in the Upgrade. This assumption is motivated by the abundance of charm and light-quark signals which are partially contained in LHCb's non-hermetic detector geometry. An inclusive selection of these events is essentially impossible, as signatures like displaced vertices cannot be used for charm or lighter hadron decays. It is therefore natural to move these selections to Turbo, where specialised exclusive selections can be efficiently implemented. The rate of beauty physics is, on the other hand, an order of magnitude smaller and kinematics allows a much stronger discrimination, which makes it possible to achieve a reasonable retention with inclusive selections, and thus keep a degree of safety to recover from mistakes and flexibility to develop new analysis ideas as Run 3 progresses. Similar arguments also apply to the electroweak and high-pT physics programmes, and to multi-lepton signatures (both from beauty and lighter particle decays) which may be particularly important for the lepton universality and lepton flavour violation searches in Run 3.

Because of the above arguments, the 10 GB/s scenario is considered as the baseline. It assumes that 60% of Run 2 FULL stream selections are migrated to the Turbo stream while leaving the remaining 40% of trigger lines, corresponding to Run 2 inclusive beauty selections, in the FULL stream. According to Tab. 4.1, the latter amount to a rate of about 3 kHz, as also discussed in more detail in Ref. [24]. This scenario will therefore allow a substantial rate of inclusive triggers, in particular for electroweak physics, high-pT searches, and inclusive b decays. This scheme enables the LHCb Upgrade to continue with the Run 1 and Run 2 LHCb physics programme while, at the same time, leaving enough flexibility to address unforeseen discoveries or analysis ideas. Along these lines, the 7.5 GB/s scenario would limit this flexibility and require moving most of the beauty physics to the Turbo stream. The 5 GB/s scenario would require performing 99% of our analyses using Turbo. Under the above assumptions, the Run 3 throughput to tape of the three main streams (FULL, Turbo and TurCal) is given in Tab. 4.3 for the baseline scenario. Close to 60% of the bandwidth to tape will be for the FULL stream, although it represents only about 25% of the event rate.

Table 4.3: Extrapolated throughput to tape for the FULL, Turbo and TurCal streams during the Upgrade, in the baseline scenario.

stream   rate fraction   throughput (GB/s)   bandwidth fraction
FULL     26%             5.9                 59%
Turbo    68%             2.5                 25%
TurCal   6%              1.6                 16%
total    100%            10.0                100%

However, as discussed in Sec. 3, a further combined offline event selection (an 80% retention factor is assumed) and size reduction is expected to reduce the average event size of the FULL and TurCal streams on disk to a size similar to that of the Turbo stream. The stripping consists in running selections similar to those used for the Turbo stream and implements the Turbo persistence model. This scheme allows for reprocessing of the FULL and TurCal streams saved on tape and will facilitate potential migration of some of the selections from FULL to Turbo.

The throughput to disk in the Upgrade, after the offline processing of the FULL and TurCal streams, is given in Tab. 4.4. The total throughput is considerably reduced, to less than 4 GB/s, and the relative weight of the FULL stream bandwidth drops to 22%. The flow of data from the trigger to disk storage is graphically summarised in Fig. 4.2.
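A minimal back-of-the-envelope check, assuming the stripping keeps 80% of FULL and TurCal events and shrinks them to roughly one-sixth of their size (the Turbo/FULL ratio assumed above), reproduces the disk throughputs of Tab. 4.4 from the tape throughputs of Tab. 4.3:

    # Back-of-the-envelope check of Tab. 4.4 starting from the tape rates of Tab. 4.3,
    # assuming an 80% offline retention and a reduction of FULL/TurCal events to
    # roughly one-sixth of their size after stripping.
    tape_gbs = {"FULL": 5.9, "Turbo": 2.5, "TurCal": 1.6}   # Tab. 4.3
    retention, size_ratio = 0.80, 1.0 / 6.0

    disk_gbs = {
        "FULL":   tape_gbs["FULL"] * retention * size_ratio,    # ~0.8 GB/s
        "Turbo":  tape_gbs["Turbo"],                            # already compact, unchanged
        "TurCal": tape_gbs["TurCal"] * retention * size_ratio,  # ~0.2 GB/s
    }
    total_gbs = sum(disk_gbs.values())                          # ~3.5 GB/s, as in Tab. 4.4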

Table 4.4: Extrapolated throughput to disk after offline processing, for the FULL, Turbo and TurCal streams during the Upgrade, in the baseline scenario.

stream   throughput (GB/s)   bandwidth fraction
FULL     0.8                 22%
Turbo    2.5                 72%
TurCal   0.2                 6%
total    3.5                 100%

4.3 Risk considerations concerning output bandwidth scenarios

As discussed in the previous section, the baseline 10 GB/s scenario implies the relatively aggressive assumptions that the LHCb signal purities and event compression abilities are maintained in the Upgrade, notwithstanding the growing pileup. Therefore, any scenario of reduced bandwidth carries significant technical risks. The compression needed to move selections from the FULL to the Turbo stream, or additional compression of Turbo stream events, would rely on aggregating event information into an ever smaller number of high-level features, and there is no proof at present that this would not introduce uncontrollable correlations or biases in the physics observables of interest. Reducing the output bandwidth would also substantially reduce the flexibility to allow for increasing trigger efficiencies and for unforeseen discoveries or analyses. The assumed factor two increase in the trigger efficiency obtained from the removal of the present L0 trigger is conservative and based on extrapolations for b-physics. For charm and most other Turbo stream physics, the potential gains are much larger due to the relaxation of minimum pT thresholds. This is illustrated in Fig. 4.3, which shows the L0 hadron trigger efficiency relative to a high-purity offline selection for a variety of charm signals. Most charm signals have efficiencies between 10 and 20% in Run 2, so factors of up to five could in principle be gained. This additional efficiency would roughly require an additional 2–3 GB/s of output rate, including a significant contribution from low-pT calibration samples.

Figure 4.2: Data flow at the Upgrade in the baseline 10 GB/s throughput scenario. Left: event rates. Right: throughput in GB/s. Box widths are proportional to the corresponding quantities, according to Tabs. 4.3 and 4.4.

Figure 4.3: Hadronic final state trigger efficiency for different charm decays, measured using a tag-and-probe method in data. The efficiency is given relative to high-purity offline selections for each decay mode. The label "run block" refers to different data-taking periods during Run 2. More details about how the efficiency is measured can be found in Ref. [24].

4.4 Analysis case studies

In an LHCb physics analysis, signal candidates analysed offline are typically required to have been selected by at least one trigger selection in HLT2. This explicit linking of offline and online candidates allows the set of all selection requirements to be clearly defined so that they can be characterised for systematic study. Events captured by an HLT2 trigger selection are sent offline to either the FULL stream or the Turbo stream, as discussed in Chap. 3, and hence different measurements have access to different levels of information. Which stream is chosen for a trigger selection depends on the needs of the analyses that will use candidates selected by that trigger. As inclusive triggers are used by many analyses encompassing a large set of needs, they are sent to the FULL stream, whereas exclusive triggers are more suitable for sending events to the Turbo stream. In this section, case studies of three physics analyses are presented to show how such stream choices are made, highlighting the potential savings and trade-offs. The correct choice is the one that minimises computational resources, as the average event size in the Upgrade Turbo stream will be around one-sixth of that in the Upgrade FULL stream, while maximising physics potential and reach, which is most safely done by storing as much information as possible.

The FULL stream persists the full HLT2 reconstruction for each event, whereas events sent to the Turbo stream typically contain only a small subset of the event information used and created in HLT2. The exact subset of information sent to the Turbo stream is defined in a selection-specific way. In all cases, the high-level, reconstructed objects from the output of an HLT2 selection are persisted, which comprise track, calorimeter, and particle identification objects, fitted decay vertices, and all fitted primary vertices. Most selections sent to the Turbo stream do not request any additional information; however, other objects from the reconstruction can be requested for persistence after a trigger selection has passed. The set of objects can be defined in an arbitrary way, and may or may not relate directly to the selection candidate itself. Examples include: every track that lies within a cone centred on the trajectory of the candidate; all reconstructed photons; or pile-up suppression, where only objects clearly identified as originating from the same primary vertex associated to the signal candidate are persisted. A trigger selection can request that all objects created in the HLT2 reconstruction are persisted, corresponding to an Upgrade FULL stream event, and it may also request specific subdetector raw banks, e.g. those produced by the vertex locator or by the electromagnetic calorimeter. This remarkable level of granularity is already available in the Turbo stream, having been gradually developed and deployed over the course of Run 2, and as such it is able to support the full breadth of the LHCb physics programme. The following three case studies illustrate how different analyses can exploit the granularity in event information to balance physics goals with output bandwidth.
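Schematically, the persistence choices available to a trigger selection can be pictured as in the sketch below. This is purely illustrative: the class and field names are hypothetical and do not correspond to the actual LHCb trigger configuration interface.

    # Hypothetical sketch of the persistence choices available to a trigger selection;
    # names are illustrative only and do not correspond to the LHCb software API.
    from dataclasses import dataclass, field

    @dataclass
    class PersistenceRequest:
        # Always persisted: tracks, calorimeter and PID objects, fitted decay vertices
        # and all fitted primary vertices of the selected candidate.
        candidate_objects: bool = True
        # Optional extra reconstruction objects, defined per selection.
        extra_selections: list = field(default_factory=list)
        # Persist everything HLT2 reconstructed (an Upgrade FULL-like event).
        full_reconstruction: bool = False
        # Optional subdetector raw banks, e.g. vertex locator or ECAL.
        raw_banks: list = field(default_factory=list)

    # A compact exclusive line keeps only the candidate:
    charm_cpv_line = PersistenceRequest()

    # A line needing more event context adds selected objects and a raw bank:
    spectroscopy_line = PersistenceRequest(
        extra_selections=["tracks forming a good vertex with the candidate"],
        raw_banks=["ECAL"],
    )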

4.4.1 Charm CP violation

The production rate of charm hadrons is around 20 times larger than that of beauty hadrons, and so in the Upgrade it is not considered feasible to run an inclusive trigger selection for charm that would fit within any reasonable trigger output bandwidth scenario. In 2018, around 240 exclusive HLT2 selections reconstruct charm decays, with over 70% of these sending events to the Turbo stream. This strategy will continue into the Upgrade, albeit with an even larger Turbo event fraction.

The suitability of charm decays for the Turbo stream stems from two factors. Firstly, backgrounds to charm analyses are dominated by random combinations of other objects in the event as well as other charm decays, both of which can be suppressed by particle identification requirements. This negates the need for isolation criteria in almost all charm analyses, and the large charm production rate relaxes the need for developing the most efficient possible selection. Secondly, at LHCb charm flavour tagging is universally performed by reconstructing a flavour-specific decay of a heavier state, such as D∗+→ D0π+ or a semileptonic B decay. Due to this, no additional information is required for performing CP violation studies beyond the charm candidate itself and the tagging particle, both of which form part of the trigger candidate. This is in contrast to the beauty meson case, discussed in Sect. 4.4.2, where the flavour of the beauty meson is inferred from other information in the event.

The most precise measurements of CP violation to date have been made using Turbo stream candidates [25]. Important inputs to such measurements include the charged pion and kaon detection asymmetries. These can be determined using control samples of charm hadron decays, and in Run 2 these samples only required information pertaining to the fully reconstructed charm candidate decay selected in HLT2. Given the large production rate of both the charm signal and charm control modes, the ability to compress the event size of charm selections using the Turbo paradigm is crucial for the continuation of the charm physics programme into the Upgrade, with its viability having been demonstrated in Run 2.


4.4.2 Inclusive beauty

The relatively low rate of beauty hadron production in comparison to charm hadron production permits an inclusive beauty trigger in the Upgrade. Capturing even the known set of beauty hadron decays with exclusive trigger selections is challenging due to their large number. In addition, there are many decay modes not yet discovered, some of which are not currently under experimental focus but may be later. For these reasons, LHCb employs a set of inclusive trigger selections to capture reconstructed decay topologies that are likely to belong to true beauty decays. These selections have proven to be a cornerstone of the heavy-flavour programme in Run 1 and Run 2.

By definition, inclusive selections may not use all the reconstructed information of the decay, and so during both Run 1 and Run 2 the inclusive beauty triggers sent events to the FULL stream. Exclusive decays are then fully reconstructed offline, with regular reprocessings giving access to decay channels not previously considered. Analyses relying on the topological selections include measurements of the CKM angle γ, which reconstruct beauty final states containing open-charm hadrons, and measurements of direct and mixing-induced CP violation, which reconstruct charmless three-body decays (see Refs. [26–29] for examples).

At the start of Run 3, the inclusive beauty trigger will continue to send events to the FULL stream. This is motivated by the need to develop new analyses using the data already collected and to develop and refine crucial isolation and flavour tagging selections. Unlike in the charm case, reducing the average event size through pileup suppression is more challenging with an inclusive selection, as the pointing direction of the heavy-flavour hadron is not known. Storing events in the RDST format will allow detailed event size reduction studies to be performed on real data, opening the possibility of implementing selective persistence in the inclusive triggers. As analyses mature, exclusive selections, which have rates orders of magnitude lower than the inclusive triggers, can be implemented in HLT2, with subdetector raw banks requested as necessary. The following section gives an example of how selective persistence can be employed, and how this could be applied to the inclusive selections.

4.4.3 Excited charm spectroscopy

Excited charm hadrons are reconstructed via their decays down to ground-state charm hadrons, such as D∗+→ D0π+ and Λc(2940)+→ Λc+π+π−. Each ground-state hadron, D0, D+, Ds+, or Λc+, is only reconstructed through one or two low-multiplicity, Cabibbo-favoured decays to protons and charged pions and kaons. These decays have large branching fractions and are simple to select cleanly, maximising sensitivity to excited states. In 2016, the trigger selections of these ground-state decays were sent to the Turbo stream with full reconstruction persistence enabled, allowing the ground-state hadrons to be combined with arbitrary additional objects in the event offline. This saved output bandwidth in comparison to saving the full raw event, but much of the information in the event was not used for subsequent analysis, such as tracks that do not form a good-quality vertex with the ground-state charm hadron.

Since 2017, additional objects in the event have been selectively persisted. Tracks (electrons, muons, protons, and charged pions and kaons), K0S meson candidates, and Λ baryon candidates are only persisted if they form a good-quality vertex with the ground-state charm hadron candidate. Neutral particles reconstructed using information from the electromagnetic calorimeter (photons, neutral pions, and η mesons) are only persisted if the invariant mass of the neutral–charm combination is no more than 885 MeV/c2 above the production threshold, aligned with previous selections of excited charm states. The reduction in persisted information corresponded to a 66% decrease in average event size for events selected by the ground-state triggers. As the production rate of ground-state charm hadrons decaying to Cabibbo-favoured final states is large, this led to a significant saving in the overall bandwidth of the Turbo stream. This permitted the addition of new trigger selections, expanding the physics reach of the experiment. The typical event selected by the ground-state trigger lines used for spectroscopy is around five times larger than events selected by trigger lines used for charm CP violation studies, and due to their high rate the former dominate the average Turbo stream event size.
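A minimal sketch of such persistence criteria is given below. Only the 885 MeV/c2 mass window is taken from the text above; the vertex-quality threshold and the simple object model are illustrative placeholders, not the actual trigger implementation.

    # Illustrative sketch of the selective-persistence criteria described above.
    # Only the 885 MeV/c^2 window is taken from the text; the vertex-quality cut
    # and the object model are hypothetical placeholders.
    import math

    MASS_WINDOW = 885.0       # MeV/c^2 above the production threshold
    GOOD_VERTEX_CHI2 = 10.0   # placeholder vertex-quality requirement

    def invariant_mass(p4_a, p4_b):
        """Invariant mass of two (E, px, py, pz) four-vectors, in MeV."""
        e = p4_a[0] + p4_b[0]
        px, py, pz = (p4_a[i] + p4_b[i] for i in range(1, 4))
        return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

    def keep_charged(vertex_chi2_per_dof):
        """Persist a track, K0S or Lambda only if it vertexes well with the charm hadron."""
        return vertex_chi2_per_dof < GOOD_VERTEX_CHI2

    def keep_neutral(charm_p4, neutral_p4, threshold_mass):
        """Persist a neutral only if the neutral-charm mass lies within the window above threshold."""
        return invariant_mass(charm_p4, neutral_p4) - threshold_mass < MASS_WINDOW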

It is foreseen that the selective persistence strategy for studies of excited charm states will continue in the Upgrade. As the technique is so flexible, it is also expected that several other trigger selections currently sent to the FULL stream will move to selective persistence in the Turbo stream in the Upgrade. Exploratory studies have shown that selective persistence can be applied to the inclusive beauty lines with over 90% efficiency in signal retention, whilst rejecting over two-thirds of the reconstructed objects in the event. This can be implemented by taking the selected two- or three-body inclusive 'seed' vertex and persisting all other objects in the event that a multivariate algorithm determines to be an associated heavy-flavour decay product. Detailed studies are, however, still needed to understand any kinematic or geometric biases which this selection may introduce, particularly in light of the statistical sensitivity of the Upgrade. Where biases are shown to be under control, and as our understanding of trigger selections sent to the FULL stream improves throughout Run 3, it is foreseen that certain inclusive selections can be migrated to the Turbo stream with selective persistence. This might prove particularly important if charm trigger efficiencies improve by more than the factor two assumed in our extrapolations.


Chapter 5

Resource provisioning

The computing resources needed for operating the offline computing infrastructure are described in this chapter. They include the resources used for the services running the LHCb distributed computing infrastructure, the resources provided by the Worldwide LHC Computing Grid (WLCG), and other computing resources which are given by their providers on a voluntary basis or used ad-hoc in an opportunistic way.

5.1 Computing infrastructure

As discussed in detail in Ref. [3], the framework to operate the workload management and data management on the distributed computing infrastructure is provided by the DIRAC [30] framework and its experiment-specific extension LHCbDIRAC [31]. The LHCbDIRAC infrastructure relies on database backends and services. The databases are, and will continue to be, provided by the CERN/IT database infrastructure (database on demand [32], Oracle). The services run on a dedicated computing infrastructure also provided by CERN/IT. The engineering to scale up this infrastructure to the operational levels needed by the LHCb Upgrade is also described in Ref. [3].

To interact with virtualized computing resources LHCb is developing two systems which are currently also used by other Virtual Organizations in various countries. VAC is a self-organizing system [33], which can deploy payloads encapsulated within virtual machines on hypervisor nodes. The nodes do not need to be orchestrated by an Infrastructure-as-a-Service (IaaS) system, like e.g. Openstack,1 as they communicate among each other to spawn more virtual machines or tear them down depending on the load of the system. In order to interact with IaaS infrastructures LHCb uses the vCycle system [34], which follows the same paradigm as VAC but can interact with IaaS through standardized interfaces such as Amazon EC2. LHCb uses CernVM [35] images as virtual machines, which are fully integrated with the CernVM file system to deploy both operating system images and updates as well as the experiment-specific application software and conditions data.

In addition to LHCbDIRAC, LHCb also relies on several other services which are used to operate the distributed computing infrastructure:

• a file system which provides the possibility to easily deploy application software releases, as well as other application-related data such as detector conditions;

• a file transfer service to manage movement and replication of data between storage sites, as well as interaction to and from tape storage;

• an authentication service to handle the credentials of users and to check their validity to operate on the distributed computing infrastructure;

• services to interact with batch or cloud systems for pilot submission to these resources and for their monitoring;

• a service and a protocol to interact with disk storage resources in a coherent way across the whole infrastructure;

• a ticketing service to interact with resource providers in case of issues and to provide follow-up on those;

• a management and operations team to coordinate the grid-wide coherent interactions with resources, the deployment of new resources and the decommissioning of services if needed.

1http://www.openstack.org/

All the above services are currently provided by external parties and LHCb will continue to rely on them, or their updated versions, for future operations.

5.2 Pledged computing resources via WLCG

The provision of resources for the computing infrastructure provided by the Worldwide LHC Computing Grid (WLCG) follows a pledging scheme where computing sites provide a dedicated amount of resources to the experiments. The pledged resources are based on requests submitted by the experiments for the forthcoming years and accompanying resource usage reports. Both documents are provided twice a year to the relevant funding bodies.

The WLCG infrastructure is set up in tier levels. The Tiers used by LHCb are the Tier 0 at CERN, major Tier 1 sites in several countries and approximately ninety additional Tier 2 sites, both in countries with Tier 1 centres and in other countries. The computing and storage resources, especially in North America and Asia, are expected to increase until the start of Run 3.

The Tier 1 sites foreseen to be used by the LHCb Upgrade are:

• IN2P3-CC (Lyon / France)

• FZK-T1 (Karlsruhe / Germany)

• CNAF-T1 (Bologna / Italy)

• NL-T1 (Amsterdam / The Netherlands)

• PIC (Barcelona / Spain)

• RAL-T1 (Rutherford / UK)

• RRCKI-T1 (Moscow / Russia)

CPU resources are provided at all Tier levels. Tape storage is only provided at Tier 0 and Tier 1 sites. The usage of disk on a limited number of Tier 2 sites (see Chap. 2) will continue. Limiting the storage resources (tape and disk) to a restricted number of sites has proven to be a successful operational model and will also be continued during the LHCb Upgrade.

The relaxed usage of the MONARC model already mentioned in Chap. 2 will also continue after the Upgrade. Using the mesh processing paradigm [7], the so-called Tier 2 "helper sites" are attached to one or more storage sites. The helper site receives a payload, downloads the input data from the remote storage site, processes the files locally and subsequently uploads the output data to the same storage site from which the input data were downloaded. This concept will be used for data reprocessing campaigns to increase the throughput, but also during prompt processing in case the Tier 0 and Tier 1 sites do not have enough resources to cope with the load. The Gaudi federation concept [36] is used to read input data from storage sites other than those initially foreseen. Within this scheme, an application is deployed with additional information on the location of all replicas of all needed input data files. If the first-priority copy of a file is not readable (for example because the file is corrupted or the disk storage is not available), the application searches over the WAN for remote replicas of the input file across the federation, and reads the data from there. This concept is especially useful for user analysis files, for which multiple replicas can be available.
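The fallback behaviour can be sketched as follows. This is an illustration of the concept only; the function and its signature are hypothetical and do not reproduce the actual Gaudi federation code.

    # Illustrative sketch of replica fallback: try each known replica in priority
    # order and read from the first one that can actually be opened.
    def open_first_readable(replica_urls, open_fn):
        failures = []
        for url in replica_urls:
            try:
                return open_fn(url)          # e.g. an xrootd open over the WAN
            except OSError as exc:           # corrupted file, storage down, ...
                failures.append((url, exc))
        raise RuntimeError(f"no readable replica among {len(replica_urls)}: {failures}")

    # Usage (hypothetical URLs):
    # data = open_first_readable(["root://site-a//lhcb/user/file.dst",
    #                             "root://site-b//lhcb/user/file.dst"], open_fn=my_open)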

Data handling and data replication traditionally follow a "democratic" principle where data are replicated over all possible storage sites depending on the available space and the capacity of the corresponding site. For detector data, the smallest block to be replicated is represented by the files corresponding to one detector run, i.e. at most one hour of data taking. Derived data sets are also kept on the same storage site. When data are replicated, all descendant files from one run are also replicated. Intermediate files of a simulation job (typically executed on a Tier 2 site) are stored on a topologically close Tier 1 site and deleted after they have been processed by the corresponding application. The final simulated event files, ready for user analysis, are also uploaded to a topologically close Tier 1 site and then replicated following the democratic data replication policy. This principle has proven to be successful, as it made the data set handling operations easier and allowed the load of applications using the distributed data to be optimised. This mechanism will continue to be used for the LHCb Upgrade.

The LHCb Upgrade will continue to rely mainly on the computing resources provided by WLCG. Storage resources (tape and disk) are currently provided almost exclusively via the WLCG pledging process. Storage will continue to be used only on WLCG resources for data samples centrally managed by the experiment. Setup and operation of large storage resources are indeed highly complex, and thus the standardised infrastructure provided by WLCG is preferred over other solutions. Should WLCG extend its storage possibilities, e.g. via the DOMA project (see for example Ref. [37]), those resources will naturally also be considered for use in the future.

5.3 Non-pledged computing resources

In addition to the CPU resources pledged via WLCG, LHCb also uses several additional computing resources in an opportunistic and/or ad-hoc way. They come from two different sources: the LHCb online farm and opportunistic resources not owned and not under the control of the experiment.

5.3.1 Online farm

The LHCb online (or HLT) farm, located near the detector at LHC Point 8, currently provides computing resources which are used for running the trigger and alignment applications during data-taking periods. Outside these periods, when otherwise idle, the farm is used by LHCb for offline data processing, with a substantial number of simulation jobs being deployed there.

During regular data-taking periods, the online farm is solely used for the trigger applications. However, there are several running conditions where, due to the low luminosity, the full computing power of the HLT farm is not needed. They include, for example, intensity ramp phases or heavy-ion runs. In these periods the online and offline applications, essentially simulation jobs, can be run concurrently. This is possible thanks to the fact that trigger and offline applications use the same hardware infrastructure, without any intermediate virtualization layer, and to the implementation of a "fast stop" mechanism that allows switching between offline and online usage of the farm within a reasonable time of about 1–2 hours2. With this mechanism a signal is sent to the application, which finishes processing the current event, then tears itself down gracefully and uploads the output data within minutes. In principle this mechanism can be used with any LHCb application running within the Gaudi framework, but from an operational point of view it currently only makes sense for simulation jobs, where the number of events processed by a single application is irrelevant.
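The control flow of such a mechanism can be sketched as follows. This is an illustration of the idea only, not the Gaudi implementation, and all names are hypothetical.

    # Illustrative sketch of a "fast stop": on receiving a stop signal the job finishes
    # the event in hand, then finalises and uploads its output instead of being killed.
    import signal

    stop_requested = False

    def _request_stop(signum, frame):
        global stop_requested
        stop_requested = True                 # acted upon between events, never mid-event

    signal.signal(signal.SIGTERM, _request_stop)

    def run(event_source, process_event, finalise_and_upload):
        for event in event_source:
            process_event(event)
            if stop_requested:
                break
        finalise_and_upload()                 # write out and register whatever was produced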

The same concepts will continue to be applied in the Upgrade. Although the upgraded HLT farm will be substantially more powerful than the current one, it will nevertheless be much more heavily loaded, given the much higher input rate [2]. This will limit its availability for offline applications to the LHC winter shutdown periods.

While in the current HLT farm there is no mechanism to access the locally deployed storage for offline usage, the possibility to access these resources, when idle, via a common namespace is being considered for the Upgrade. Notably, during the LHC winter shutdown periods, analysis data could be stored on the online farm disks which, together with the CPU, could provide additional capacity for executing user analysis jobs.

5.3.2 Other opportunistic resources

LHCb also opportunistically uses computing resources that it does not own. These include High-Performance Computing (HPC) centres and resources hosted on WLCG sites that are not pledging to LHCb. In both cases the main use is for Monte Carlo simulation, as it does not require input data. The amount of work that can be performed with these CPU resources is unpredictable, as LHCb applications are given lower priority than those of other users, and LHCb jobs are essentially used to fill otherwise unexploited CPU time.

While the deployment of LHCb applications on WLCG sites via standardised interfaces to the local batch system and a standardised infrastructure is easy to set up, on high-performance computing centres the deployment of LHCb simulation software is harder. This stems from the requirements of the experiment for executing its applications, which include outbound internet connectivity from the batch worker nodes, the deployment of CVMFS for application and conditions data, computing hardware compatible with the Intel architecture, and a standardised interface to the batch system for the deployment of DIRAC pilot jobs. For cases where these standards are not satisfied, solutions can be found, but experience shows that the application deployment requires substantially more work. A number of engineering efforts are underway, such as the porting of the software stack to non-Intel architectures, that may help to further relax the standard requirements for non-WLCG-compliant computing centres, although it is unlikely they will be completely removed by the time of Run 3. The amount of available resources will need to be evaluated against the additional effort to exploit them.

A further source of opportunistic resources is the BOINC [38] infrastructure. The BOINC system allows individual users to run LHCb jobs on their private computers, such as home or institute office computers and laptops. Again, the main use of these resources is for simulation jobs, which are shortened in execution length in order to allow them to finish within one session. In the case of disconnection (e.g. the user closes their laptop lid) the connection to the application and the produced data is currently lost. As there cannot be a trust relationship with BOINC-provided resources, the security constraints and data scrutiny measures needed are also significantly higher. The amount of work obtained from outside the experiment via this resource is very small, and it is best regarded as an outreach activity. The continuation of BOINC during the Upgrade needs to be evaluated against its operational and development effort. Resources which could be exploited via BOINC but are under the control of the experiment or a collaborating institute, such as local batch clusters, can also be exploited with alternative mechanisms, e.g. via VAC (see Sect. 5.1).

LHCb plans to continue to use computing resources from non-pledging sites as well as opportunistic and ad-hoc resources during the Upgrade era, and expects those to increase in the future.

2Notice that without this mechanism, the stop of the offline applications to resume the trigger activity had to be planned at least two days in advance, as the only available procedure was to wait for the end of the typically 6-hour-long simulation jobs.

Chapter 6

Resource requirements

This chapter describes the resource requirements of the LHCb Upgrade for the LHC Run 3 and Long Shutdown 3 (LS3) periods. The same model will be replicated for Run 4 and LS4, although upcoming new technologies and computing models, which will presumably be developed in the context of the high-luminosity LHC upgrades of the ATLAS and CMS experiments, may bring further optimisations. The LHCb Upgrade needs for CPU, disk and cold storage (e.g. tape) are presented. A baseline scenario is given assuming standard data-taking conditions and a specific operational model. Additional options, where some of the baseline assumptions are altered to further reduce the resource requirements, are also given. For each of the alternative scenarios the potential losses or risks for the physics programme, as well as the impact on operational complexity, are described. As discussed in Chap. 4, the resource needs for storage and CPU are mostly decoupled and hence are discussed separately for each case.

6.1 Common assumptions for the resource requirement model

In all models for the estimation of the resource needs described below, a set of common assumptions is considered, except where explicitly mentioned, as described in the following sections.

6.1.1 Luminosity

The LHC is assumed to provide 5 million seconds of collisions per data-taking year. The integrated luminosity collected by the experiment per full data-taking year is expected to be 10 fb−1. The parameters for the machine in the first year of Run 3 are not yet known. It is therefore assumed that for the first year of data taking (2021) the commissioning of the machine will result in half of the beam time and collected luminosity of a standard operational year.

6.1.2 Monte Carlo simulation

On the basis of the experience gained during Run 1 and Run 2, it is expected that the production of Monte Carlo events will dominate the CPU needs, at the level of about 90% of the total offline data processing resources. The simulation data will be mostly saved in MDST format, with substantial storage space savings.

The requirements in terms of number of simulated events have been estimated by studying previous Monte Carlo productions. In Tab. 6.1 the number of simulated events per year and the recorded luminosity corresponding to the 2015 and 2016 data taking are listed. It can be seen that the number of simulated events per recorded fb−1 per calendar year is constant at 2.3 × 10^9. In the following, the resource needs for CPU work are therefore assumed to depend only on the recorded integrated luminosity, and to be independent of the trigger selections that are applied.

Table 6.1: Simulated events per fb−1 per calendar year, corresponding to the 2015 and 2016 data-taking periods. The number of simulated events per year, the recorded luminosity and the simulated events per recorded fb−1 per year are shown.

                                       2015   2016
Simulated events/year (×10^9)          0.7    3.7
Recorded luminosity (fb−1)             0.3    1.6
Simulated events/fb−1/year (×10^9)     2.3    2.3
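This scaling assumption can be cross-checked directly against the numbers of Tab. 6.1:

    # The scaling assumption above, checked against Tab. 6.1: simulated events are
    # taken proportional to the recorded luminosity, at 2.3e9 events per fb-1 per
    # calendar year, independently of the trigger selections applied.
    EVENTS_PER_FB_PER_YEAR = 2.3e9

    def simulated_events_per_year(recorded_fb):
        return EVENTS_PER_FB_PER_YEAR * recorded_fb

    assert round(simulated_events_per_year(0.3) / 1e9, 1) == 0.7   # 2015
    assert round(simulated_events_per_year(1.6) / 1e9, 1) == 3.7   # 2016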

Figure 6.1: Number of simulated events for a given data-taking year produced in each calendar year.

The production of simulated events needed to represent a given data-taking year is observed to be completed after six years, as shown in Fig. 6.1, where the number of simulated events generated for a given data-taking year as a function of calendar year is plotted. It can be seen that the production starts already during a given data-taking year but ramps up only in the following year. The peak of simulated events corresponding to a given data-taking year is typically reached only four years later, with the number decreasing over the following two years.

In the following sections it is therefore assumed that the simulation of Run 2 events will phase out during the years 2021–2023.

The Monte Carlo production is expected to make extensive use of the fast simulation options described in Sect. 3.5. Still, a certain fraction of events will need to be produced through a full Geant4-based simulation (full simulation). In the following sections, it is assumed that the Monte Carlo production will be subdivided into 40% of full simulation, 40% of fast simulations, and 20% of parametric simulations. This is an aggressive assumption that depends on the success and adoption by users of developments that are currently being put in place. In Run 2 the fast and parametric simulations account for less than 5% of the generation, as these methods are newly developed. Some fast simulation methods have now been validated and uptake is increasing. The time needed to simulate an event is expected to be roughly 120 s, 40 s and 2 s for full, fast and parametric simulations, respectively, on a standard CPU core. The current values to simulate Upgrade events are a factor five above these goals. The generator and detector response steps consume 95% of the time of the whole simulation process. However, we expect to recover this gap through the major ongoing rewriting of the simulation framework and thanks to the substantially improved efficiency of the latest and upcoming releases of the Geant toolkit.

The simulation framework already allows a final filtering of events to be applied as a last step, to further save disk space. With this option, the retention rate over all simulated events is currently around 33%. The same reduction rate is assumed to hold also in the Upgrade.

A major change is foreseen for the simulation output data format, with the majority of events (90%) written in MDST format and only the remaining part in DST format. A size reduction by a factor of 40 can be obtained between MDST and DST, as demonstrated already in Run 2.
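Taken together, these assumptions can be turned into a simple arithmetic sketch of the average CPU cost and of the stored volume per generated event. The derived figures below are illustrative combinations of the numbers quoted above, not values taken from this document.

    # Arithmetic sketch of the simulation assumptions above (illustrative only).
    fractions = {"full": 0.40, "fast": 0.40, "parametric": 0.20}
    seconds_per_event = {"full": 120.0, "fast": 40.0, "parametric": 2.0}  # target times

    # Average CPU time per generated event on a standard core: ~64 s.
    avg_cpu_s = sum(fractions[k] * seconds_per_event[k] for k in fractions)

    retention = 0.33        # final filtering keeps about a third of generated events
    mdst_fraction = 0.90    # 90% of the output in MDST, the rest in DST
    mdst_over_dst = 1 / 40  # MDST events are ~40 times smaller than DST events

    # Stored volume per generated event, relative to keeping every event in DST:
    relative_volume = retention * (mdst_fraction * mdst_over_dst + (1 - mdst_fraction))
    # ~0.04, i.e. roughly a factor 25 reduction with respect to full DST retention.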

6.1.3 Offline data processing

The offline re-processing of the FULL and TurCal streams described in Sect. 3.3 requires a significant reading throughput from tape. The rate measured in a typical two-month-long stripping campaign in Run 2, and those required for the Upgrade for a commissioning year and for a full data-taking year, are shown in Fig. 6.2. The needed throughput can be reduced by stretching the tape staging over more than two months. However, the re-processing time should not exceed four months to ensure that it can be completed during an end-of-year shutdown, and thus not interfere with data taking. The 2022 end-of-year reprocessing needs to be scrutinised to determine if the throughput needs can be met at the sites that provide tape storage, or if a partial re-processing will have to be considered to accommodate the infrastructure capabilities.

Figure 6.2: Observed tape staging throughput in the Run 2 end-of-year reprocessing campaigns (blue data point). The required staging throughput in the Upgrade, for a commissioning year and for a full year, is shown as a function of the length of the reprocessing campaign.
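As an illustrative estimate of the scale involved (using the baseline FULL and TurCal rates to tape of Tab. 4.3 and the 5 × 10^6 s of collisions per year of Sect. 6.1.1, rather than the exact inputs of Fig. 6.2):

    # Illustrative estimate of the tape staging throughput needed to re-process one
    # year of FULL + TurCal data, as a function of the campaign length. Inputs are
    # the baseline rates of Tab. 4.3 and the 5e6 s of collisions per year (Sect. 6.1.1).
    SECONDS_OF_COLLISIONS_PER_YEAR = 5.0e6
    FULL_PLUS_TURCAL_GBS = 5.9 + 1.6          # GB/s written to tape while taking data

    def staging_throughput_gbs(campaign_months, data_years=1.0):
        volume_gb = FULL_PLUS_TURCAL_GBS * SECONDS_OF_COLLISIONS_PER_YEAR * data_years
        campaign_seconds = campaign_months * 30 * 24 * 3600
        return volume_gb / campaign_seconds

    # e.g. staging_throughput_gbs(2) ~ 7 GB/s, staging_throughput_gbs(4) ~ 3.6 GB/s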


In scenarios where a certain level of data parking is needed, the tape throughput for unparking the data will have to be added to that needed for plain data processing. This can be particularly challenging if the unparking needed to allow analysis of the parked data occurs concurrently with the data re-processing.

6.1.4 Data set replicas

For safety reasons, in general two copies of all data that cannot be regenerated will be saved on tape. Therefore, as described in Chap. 3, two copies of the raw data sets will be stored on tape. An archive copy of the offline processed data for each of the three streams is also saved on tape. After the offline processing, two copies of the Turbo stream and up to three replicas of the FULL and TurCal streams will be saved on disk. A single copy of the Monte Carlo events is kept on tape, while two copies of the most popular simulated data sets (about 30% of the total) are stored on disk. A summary of data replicas is given in Tab. 6.2.

Table 6.2: Data set replicas per stream and storage media.

stream       tape                       disk
FULL         2× RDST + 1× MDST          3× MDST
Turbo        1× TurboRaw + 1× MDST      2× MDST
TurCal       2× RDST + 1× MDST          3× MDST
Simulation   1× MDST                    1× MDST (30% of data sets only)

6.1.5 Evolution of present resources

At the time of writing this document, the 2018 WLCG pledges for LHCb have been confirmed, the 2019 requests have been scrutinised and the 2020 needs have been communicated to the LHC Computing Resource Scrutiny Group and to the LHC Resource Review Board [39–42]. The 2019 and 2020 requests are shown in Tab. 6.3. They represent the starting point to evaluate the resources for the Upgrade period.

In recent years LHCb has been extremely effective in using opportunistic CPU resources, as extensively discussed in Chap. 5. These include the use of the online trigger farm and other resources not under direct control of the experiment. The amount of the latter has to be considered as an upper limit, as there is no guarantee that these resources can be obtained at a constant rate throughout the Upgrade period.

6.1.5.1 Evolution of pledged resources

As a comparison scenario for the resource requirements discussed in this document, an evolution of the present pledged resources based on the currently widely accepted WLCG "constant budget" model is assumed. In this model it is assumed that, at constant budget, the pledged resources can be increased yearly by 20% due to technology advancements. Therefore, in the following sections, while the resource requirements are given in absolute terms, they are also compared to the constant budget assumption as a gauge, although this model may not necessarily hold at the time of Run 3 and beyond.

Table 6.3: Requested WLCG pledges for LHCb in 2019 and 2020. These are currently subject to approval by the RRB but represent a natural progression from earlier years.

WLCG Year   CPU                     Disk                   Tape
            kHS06   Yearly Growth   PB    Yearly Growth    PB    Yearly Growth
2019        529     1.1             49    1.2              86    1.1
2020        631     1.2             58    1.2              92    1.1
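A sketch of this extrapolation, which is only the comparison gauge used below and not a prediction of actual pledges:

    # "Constant budget" gauge: pledges grow by 20% per year at flat cost.
    def constant_budget_projection(pledge_now, years_ahead, yearly_growth=1.20):
        return pledge_now * yearly_growth ** years_ahead

    # e.g. projecting the 2020 disk request of Tab. 6.3 (58 PB) three years ahead:
    disk_2023_pb = constant_budget_projection(58, years_ahead=3)   # ~100 PB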

6.1.5.2 Evolution of opportunistic resources

The trigger farm will be upgraded in Run 3 and will provide about a factor two more CPU peak power with respect to the present farm. However, it has to be taken into account that the trigger farm will be available only part of the time during a year. It is assumed that the online farm is available only 30% of the time during running years, corresponding to availability during winter shutdowns, and 50% during long shutdowns, when major infrastructure interventions take place. These are resources that can reasonably be expected to be available.

Conversely, it is not possible to reliably estimate the additional opportunistic CPU resources that will be available. As a guide, an evolution from the present situation for these resources is given, assuming they also scale according to the WLCG constant budget model assumed for the pledged resources. There is, of course, no guarantee that this will be the case.

6.1.6 Other common assumptions and units

The average power of a CPU core in a standard worker node is set to 10 HepSpec06. The storage resources are given in petabytes (PB) and the CPU resources in kHepSpec06 (kHS06).

Tape storage is assumed to hold two replicas of the data exported from the HLT and one archive copy of offline processed data sets where applicable. Copies of LHCb Run 1 and Run 2 data on disk will be reduced to one replica. A single copy of popular simulation data will also be kept on disk. The Run 1 and Run 2 data on tape, consisting of two replicas for raw data and one replica for processed data sets and simulation data, will stay untouched.

6.2 Baseline resource requirements

To quantify the baseline resource requirements, an output bandwidth after the HLT processing of 10 GB/s is assumed, as discussed in Chap. 4. The full bandwidth is assumed to be stored on tape, while a reduction by more than a factor two is expected for data stored on disk, thanks to the stripping process of the FULL and TurCal streams described in Sects. 3.3 and 4.2.
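A rough consistency sketch of the resulting tape volume, using only the numbers above and the assumptions of Sects. 6.1.1 and 6.1.6:

    # Rough sketch: 10 GB/s exported to tape over 5e6 s of collisions per full year,
    # with two tape replicas of the exported data (Sect. 6.1.6), gives of order
    # 100 PB of new tape per full data-taking year, before archive and simulation copies.
    HLT_OUTPUT_GBS = 10.0
    SECONDS_OF_COLLISIONS_PER_YEAR = 5.0e6
    TAPE_REPLICAS = 2

    new_tape_pb_per_year = (HLT_OUTPUT_GBS * SECONDS_OF_COLLISIONS_PER_YEAR
                            * TAPE_REPLICAS / 1.0e6)               # ~100 PB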

6.2.1 Storage resource requirements

The storage requirements per data-taking year are shown in Fig. 6.3 and Fig. 6.4 for disk and tape, respectively. The breakdown for data, simulation, and user analysis is given. The current WLCG pledges for LHCb and their evolution into the LHCb Upgrade years in the constant budget model are also shown. The corresponding numbers for the total resource request and their yearly growth factors are given in Tab. 6.4.

6.2.2 CPU resource requirements

The requirements for CPU as a function of year are shown in Fig. 6.5 and summarised in Tab. 6.4. The breakdown of the requirements for simulation and user analysis is given. As discussed in Sect. 6.1.2, Monte Carlo production amounts to 90% of the total data processing CPU power, and the processing time for data stripping is expected to be negligible, at the level of a few percent of the total. The current WLCG pledges and their evolution in the constant budget model are also given. The two leftmost bars of the plot show the actual 2017 CPU usage and an estimate for the 2018 usage. They demonstrate a CPU usage larger by about a factor of two than the pledged resources, due to the efficient use of opportunistic resources. The evolution of the resources from the HLT farm and from other sources not under the control of LHCb is also shown in Fig. 6.5.

Figure 6.3: Baseline requirements for disk as a function of year. The breakdown is given by usage type (data, simulated events, user analysis). The current WLCG pledges for LHCb and their evolution to the Upgrade assuming the constant budget model are also shown.

Figure 6.4: Baseline requirements for tape as a function of year. The breakdown is given by usage type (data, simulated events). The current WLCG pledges for LHCb and their evolution to the Upgrade assuming the constant budget model are also shown.

Table 6.4: Baseline resource requirements for (left) disk, (centre) tape, and (right) CPU as a function of year during Run 3 and LS3. The yearly growth factors are also shown.

WLCG Year              Disk                   Tape                   CPU
                       PB    Yearly Growth    PB    Yearly Growth    kHS06   Yearly Growth
Run 3   2021           66    1.1              142   1.5              863     1.4
        2022           111   1.7              243   1.7              1579    1.8
        2023           159   1.4              345   1.4              2753    1.7
LS 3    2024           165   1.0              348   1.0              3476    1.3
        2025           171   1.0              351   1.0              3276    0.9
Average end of Run 3         1.4                    1.5                      1.6
Average end of LS 3          1.2                    1.3                      1.4


Figure 6.5: Baseline requirements for CPU as a function of year. The breakdown is given by usage type (simulation, user analysis). The current WLCG pledges for LHCb and their evolution to the Upgrade assuming the constant budget model are also shown. The actual CPU usage for 2017 and 2018 is indicated by the grey bars. The dark blue solid band and the light blue shaded band indicate the evolution of the HLT farm and of other opportunistic resources, respectively.

6.2.3 Risk analysis

The offline data processing and data management operations, discussed in the previous sections, are similar to those currently in place. Hence the quality of service and risk level are expected to be similar to those experienced in Run 2. In addition, as discussed in Sect. 6.1.5, the opportunistic resources not under the control of LHCb have to be considered as an upper limit, and the corresponding shaded area in Fig. 6.5 represents an uncertainty range.

6.3 Optional mitigation strategies

In the previous section, the resource requests to cope with the expected baseline trigger output rate were discussed. Potential strategies to optionally mitigate these requirements are described below. The first option considered is parking a fraction of the data on tape to reduce the need for disk resources. As a second option, a net reduction of the trigger output bandwidth, which could be obtained by a more aggressive migration of trigger selections to the Turbo stream, is considered. Finally, an even more aggressive development and adoption of fast simulation is considered as a means of mitigating the CPU requirements.

6.3.1 Data parking

As a first resource mitigation option, parking of a fraction of the data is considered. As an illustrative example, a scenario in which 20% of the data is not permanently available on disk but must be staged on demand from tape is considered. A net reduction of disk resources by 20% is obtained in this way. Data that cannot be deployed on disk for user analysis will be parked on cold storage resources and staged on disk, or activated, whenever possible and needed, for example during winter shutdown periods. The tape and CPU requirements for this option remain unchanged with respect to the baseline; therefore only the disk requirements are discussed.

6.3.1.1 Storage resource requirements

The disk and tape requirements for this scenario are shown in Tab. 6.5 for each of the data-taking and shutdown years. The yearly growth factors and the average growth factors at the end of Run 3 and LS 3 are also shown.

Table 6.5: Data parking option: requirements for disk and tape as a function of year during Run 3 and LS3. The yearly growth factors are also shown.

WLCG Year              Disk                   Tape
                       PB    Yearly Growth    PB    Yearly Growth
Run 3   2021           58    1.0              142   1.5
        2022           95    1.6              243   1.7
        2023           134   1.4              345   1.4
LS 3    2024           140   1.0              348   1.0
        2025           146   1.0              351   1.0
Average end of Run 3         1.3                    1.5
Average end of LS 3          1.2                    1.3

6.3.1.2 Risk analysis

In this option, the disk requirements would be reduced in a straightforward way. However, the achieved storage resource reduction would come at significant risk. As a first consequence, delays in the data analysis are to be expected, leading to a loss of competitiveness of the experiment. The second consequence is a substantial increase in the operational load and complexity, with an unavoidable effect on the quality of service. Furthermore, this model will result in an increased throughput load on the tape systems, as the staging has to be performed as quickly as possible to allow analysts to exploit these data, potentially concurrently with data processing during the winter shutdown. Finally, the whole system would be less resilient against incidents and prolonged outages of computing centres. As demonstrated by past experience, the effects of a major outage would be severe in the context of this optional model.

6.3.2 Reduced HLT output bandwidth

As a second mitigation option, a net reduction of the trigger output bandwidth with respect to the baseline 10 GB/s is considered. This could be achieved by a more aggressive migration of the trigger selections to the Turbo stream. As an illustrative example, 87% of the physics selections are assumed to be in the Turbo stream, corresponding to a trigger output bandwidth of 7.5 GB/s, as shown in Tab. 4.2.

6.3.2.1 Storage resource requirements

The disk and tape requirements for the reduced HLT output scenario are shown in Tab. 6.6 for each of the data-taking and shutdown years, together with the yearly growth factors and the average growth factors at the end of Run 3 and LS 3. With the selections in this scenario moving from FULL to Turbo, the tape requirements are reduced, while the disk requirements stay at the same level as in the baseline scenario, with even small increases in some years.

6.3.2.2 Risk analysis

As shown in Chap. 4, a reduction of the trigger output bandwidth implies a corresponding reduction of the physics programme of the experiment. In this option, the operational model of the experiment is unchanged with respect to the baseline scenario and hence a high quality of service can be assumed to be maintained. The CPU requirements will also remain essentially unchanged.

Table 6.6: Reduced HLT output bandwidth option: requirements for disk and tape as a function of year during Run 3 and LS3. The yearly growth factors are also shown.

WLCG Year              Disk                   Tape
                       PB    Yearly Growth    PB    Yearly Growth
Run 3   2021           67    1.1              129   1.4
        2022           114   1.7              205   1.6
        2023           164   1.4              282   1.4
LS 3    2024           170   1.0              285   1.0
        2025           176   1.0              288   1.0
Average end of Run 3         1.4                    1.5
Average end of LS 3          1.2                    1.3

6.3.3 Aggressive fast simulation development

As discussed in Sect. 6.1, the CPU usage is largely dominated by the production of simulated events. New fast simulation techniques have already been introduced in LHCb, with encouraging results (see Sect. 3.5). The assumptions used to derive the baseline CPU resource requirements are given in Sect. 6.1.2 and already imply substantial savings in CPU time per simulated event with respect to the present situation.

In order to mitigate the CPU requirements, an even more aggressive evolution of fast simulation techniques and of their usage can be considered. In this option, it is assumed that the R&D on fast simulation will continue during Run 3 and will potentially achieve significant savings in CPU time per simulated event. In addition, it is assumed that the adoption of fast-simulated event samples will become increasingly widespread among analysts.

6.3.3.1 CPU resource requirements

The event simulation times assumed in this scenario, as a function of year for the full, fast and parametric simulation approaches, are given in Tab. 6.7 and shown in Fig. 6.6. As shown in Fig. 6.6, the fraction of fully simulated events and the average CPU time per event start from the values of Sect. 6.1.2 and evolve towards a considerably higher fraction of fast-simulated events and shorter simulation times.

The breakdown of the CPU requirements for simulation and user analysis, along with the WLCG pledges for LHCb and their evolution to the Upgrade assuming the constant budget model, is summarised in Tab. 6.7, where the yearly growth factors are also reported.

6.3.3.2 Risk analysis

This option affects only the simulated event production model and has no effect on the physics programme.

It does, however, imply substantial changes in analysis models, where a much larger use of fast simulation techniques is required. It entails a systematic use of data-driven calibration techniques that may potentially require more calibration data and, consequently, more storage (not taken into account here).

It should also be noted that this scenario is highly speculative, as there is no guarantee that the R&D programme on fast simulation techniques will yield the expected results.


Table 6.7: Aggressive fast simulation development option: requirements for CPU as a function of year during Run 3 and LS3. The yearly growth factors are also shown. The assumed fractions of events generated, and the average time taken per event, with full, fast and parametric simulations in each year are indicated.

WLCG Year              Percentage               Timing (s)        CPU
                       full/fast/parametric     full/fast/param.  kHS06   Yearly Growth

Run 3   2021           40/40/20                 120/40/2          863     1.4
        2022           40/40/20                 100/36/2          1423    1.6
        2023           40/40/20                 80/32/2           2051    1.4

LS 3    2024           30/50/20                 60/28/2           1844    0.9
        2025           30/50/20                 60/24/2           1542    0.8

Average end of Run 3                                                      1.5
Average end of LS 3                                                       1.2

Figure 6.6: Potential evolution of fast simulation in LHCb. The fractions of simulated events generated with full simulation, fast simulation options and parametric simulation as a function of time are shown in the shaded histogram. The assumed development of the average time per event of the simulation in these three cases is shown by the lines.
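The average CPU time per simulated event implied by these assumptions is simply the fraction-weighted mean of the full, fast and parametric timings of Tab. 6.7; the total simulation CPU then scales with the number of events to be produced. A minimal sketch of this arithmetic, using only the values tabulated above, is given below.

```python
# Minimal sketch: fraction-weighted average simulation time per event implied
# by the assumptions of Tab. 6.7 (aggressive fast-simulation scenario).

assumptions = {
    # year: (fractions full/fast/parametric, timings in seconds per event)
    2021: ((0.40, 0.40, 0.20), (120, 40, 2)),
    2022: ((0.40, 0.40, 0.20), (100, 36, 2)),
    2023: ((0.40, 0.40, 0.20), (80, 32, 2)),
    2024: ((0.30, 0.50, 0.20), (60, 28, 2)),
    2025: ((0.30, 0.50, 0.20), (60, 24, 2)),
}

for year, (fractions, timings) in assumptions.items():
    avg = sum(f * t for f, t in zip(fractions, timings))
    print(f"{year}: average {avg:.1f} s per simulated event")

# The total simulation CPU for a year is this average multiplied by the number
# of events to be produced (which scales with the integrated luminosity).
```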


6.3.4 Mitigation discussion

A range of mitigation scenarios has been considered in this section, and the consequent reductions in tape, disk and CPU requirements have been shown.

The baseline model for tape storage in the LHCb Upgrade requires an increase in resources above the presumed evolution of the current WLCG pledges according to the constant budget model. Tape is the cheapest of the three resources considered here, and the increase is small compared with the increase in data volume of the LHCb Upgrade. Still, a mitigation scenario has been considered: a further reduction in tape resources can be made by reducing the trigger output bandwidth (see Tab. 6.6). The potential loss to the physics programme would be substantial and may be considered disproportionate to the modest cost saving.

As a result of the new computing model, the disk resources required for the LHCb Upgrade exceed those expected from the evolution of the current WLCG pledges by far less than would be naively assumed. However, disk is an expensive resource and a mitigation scenario is considered. Disk resources can be reduced by a data-parking strategy: introducing data parking at the 20% level would bring the required resources to the level of the assumed evolution of the currently pledged disk (see Tab. 6.5). The load on tape throughput in the baseline model already appears challenging and would be further increased in this mitigation scenario. Operational complexity would also be substantially increased. Due to these factors, and the inherent time delay, this scenario risks a loss of competitiveness for the experiment.

The LHCb Upgrade plans to make substantial use of fast simulation techniques. The assessment of the CPU resources available for the LHCb Upgrade experiment is highly dependent on the assumptions made on opportunistic resources. If these resources scale in a manner similar to that assumed for the pledged resources, the baseline needs of the experiment can roughly be met under a constant budget assumption. Meeting them with the evolution of the currently pledged resources alone, in the absence of opportunistic resources, would require an aggressive development of fast simulation, which is highly speculative (see Fig. 6.6 and Tab. 6.7).
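For readers unfamiliar with the constant budget model invoked throughout this chapter, the sketch below illustrates its mechanics: a flat yearly spend buys more capacity every year because the cost per unit of CPU or storage falls. The starting pledge and the yearly capacity gain used in the example are hypothetical placeholders, not figures quoted in this document.

```python
# Minimal sketch of the "constant budget" projection: a flat yearly spend buys
# more capacity every year as the cost per HS06 (or per TB) falls.  All
# numerical inputs below are hypothetical placeholders for illustration.

def project_constant_budget(start_capacity, yearly_gain, years):
    """Capacity reachable each year for a constant budget.

    start_capacity : capacity pledged in the first year
    yearly_gain    : capacity growth per year at flat cost (e.g. 1.2 = +20%)
    """
    capacity = start_capacity
    projection = {}
    for year in years:
        projection[year] = capacity
        capacity *= yearly_gain
    return projection

# Example with hypothetical inputs: a 1000 kHS06 pledge in 2021 growing by
# 20% per year under a constant budget.
print(project_constant_budget(1000.0, 1.20, range(2021, 2026)))
```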


Chapter 7

Conclusion

The LHCb Upgrade experiment will collect approximately an order of magnitude higher signal yields than those that have been recorded by the LHCb experiment. An extrapolation of the current LHCb data rates would yield an increase of more than a factor of 30 in data volume, due to the higher luminosity, trigger efficiency and pileup. The innovative practices being put in place by the collaboration significantly reduce the storage and computing requirements of the LHCb Upgrade experiment compared with a naive scaling of those utilised by the LHCb experiment. The resource requirements of the LHCb Upgrade in the baseline model are summarised in Tab. 7.1.

An optimised computing model has been described in this Technical Design Report. The expanded use of the Turbo concept allows the storage resources to be reduced to a manageable level. The proposed large-scale use of this innovation represents a major step forward in the handling of data volumes for the LHCb Upgrade.

Tape storage, which is of comparatively low cost compared to disk, is used to record the baseline trigger output bandwidth of 10 GB/s. This bandwidth is considered adequate to ensure the full exploitation of the LHCb Upgrade physics programme. Disk usage is reduced by more than a factor of two with respect to tape. A significant investment is being made in the LHCb Upgrade to collect a substantial data sample for the pursuit of a precision flavour physics programme. To realise this potential, the availability of adequate storage resources is required and has the highest priority.

The CPU power needed for data processing is largely determined by the production of simulated events. The number of Monte Carlo events needed is expected to scale with the integrated luminosity, and consequently substantial CPU power will be needed for the Upgrade. The CPU needed for a full Geant4-based simulation of the required number of simulated events would be unmanageable. The development of new fast simulation techniques foreseen in the Upgrade computing model allows a substantial reduction in the Monte Carlo event processing time. It is foreseen that additional CPU power will be provided by the HLT farm when idle (e.g. during LHC winter shutdown periods). The use of opportunistic resources, not under the direct control of LHCb, will also continue.

Mitigation options have also been considered, which may reduce the storage or CPU requirements. These options have considerable impact on the physics programme or on the complexity of computing operations and are therefore considered only as backup solutions.

The computing model described in this TDR has enough flexibility to incorporate new developments that may result from the R&D programme launched to deal with the computing for the upgraded general-purpose detectors ATLAS and CMS in the HL-LHC era. While the timescale for such developments is necessarily that of Run 4 and beyond, LHCb welcomes any efforts that might reduce costs and enable optimisation already in Run 3 (e.g. the WLCG/DOMA initiative). LHCb encourages these initiatives, is very willing to contribute to them, and to continue the experiment's tradition of being an innovator and early adopter of new computing resource practices.


Table 7.1: Summary of the LHCb Upgrade computing model requirements. Top section: main assumptions of the model. Bottom section: resource requirements.

Model assumptions

L (cm⁻² s⁻¹)                          2 × 10³³
Pileup                                6
Running time (s)                      5 × 10⁶ (2.5 × 10⁶ in 2021)
Output bandwidth (GB/s)               10
Fraction of Turbo events              73%
Ratio Turbo/FULL event size           16.7%
Ratio full/fast/param. simulations    40:40:20
Data replicas on tape                 2
Data replicas on disk                 2 (Turbo); 3 (FULL, TurCal)

Resource requirements

WLCG Year    Disk (PB)    Tape (PB)    CPU (kHS06)

2021         66           142          863
2022         111          243          1579
2023         159          345          2753
2024         165          348          3467
2025         171          351          3267
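The tape figures in Tab. 7.1 can be cross-checked against the model assumptions in the top part of the table: each data-taking year adds, to first order, the trigger output bandwidth multiplied by the running time and by the number of tape replicas. The sketch below performs this arithmetic; the pre-2021 starting volume and the small residual growth visible in the table (a few PB per year, also present during LS3, presumably from simulation and derived data) are not modelled and are left implicit here.

```python
# Minimal cross-check of the tape figures in Tab. 7.1 against the model
# assumptions listed above it.  The pre-2021 starting volume and the small
# residual yearly growth are inferred from the table, not modelled here.

bandwidth_gb_s = 10          # HLT output bandwidth (GB/s)
tape_replicas = 2            # data replicas on tape
running_time_s = {2021: 2.5e6, 2022: 5e6, 2023: 5e6, 2024: 0, 2025: 0}
tape_table_pb = {2021: 142, 2022: 243, 2023: 345, 2024: 348, 2025: 351}

for year in sorted(running_time_s):
    # Raw-data increment expected from data taking alone, in PB (1 PB = 1e6 GB).
    increment = bandwidth_gb_s * running_time_s[year] * tape_replicas / 1e6
    print(f"{year}: data-taking increment ~{increment:.0f} PB "
          f"(table total {tape_table_pb[year]} PB)")
```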



References

[1] LHCb collaboration, Framework TDR for the LHCb Upgrade: Technical Design Report, CERN-LHCC-2012-007.

[2] LHCb collaboration, LHCb Trigger and Online Technical Design Report, CERN-LHCC-2014-016.

[3] LHCb collaboration, LHCb Upgrade Software and Computing, CERN-LHCC-2018-007.

[4] LHCb collaboration, LHCb computing: Technical Design Report, CERN-LHCC-2005-019.

[5] M. Aderholz et al., Models of Networked Analysis at Regional Centres for LHC Experiments (MONARC), Phase 2 Report, 24th March 2000, Tech. Rep. CERN-LCB-2000-001, KEK-2000-8, CERN, Geneva, Apr, 2000.

[6] I. Bird et al., Update of the Computing Models of the WLCG and the LHC Experiments, Tech. Rep. CERN-LHCC-2014-014, LCG-TDR-002, Apr, 2014.

[7] L. Arrabito et al., Major changes to the LHCb Grid computing model in year 2 of LHC data, J. Phys. Conf. Ser. 396 (2012) 032092.

[8] G. Dujany and B. Storaci, Real-time alignment and calibration of the LHCb Detector in Run II, J. Phys. Conf. Ser. 664 (2015) 082010.

[9] R. Aaij et al., Tesla: an application for real-time data analysis in High Energy Physics, Comput. Phys. Commun. 208 (2016) 35, arXiv:1604.05596.

[10] R. Brun and F. Rademakers, ROOT: An object oriented data analysis framework, Nucl. Instrum. Meth. A389 (1997) 81.

[11] R. Aaij et al., Selection and processing of calibration samples to measure the particle identification performance of the LHCb experiment in Run 2, arXiv:1803.00824.

[12] Geant4 collaboration, S. Agostinelli et al., Geant4: A simulation toolkit, Nucl. Instrum. Meth. A506 (2003) 250.

[13] Geant4 collaboration, J. Allison et al., Geant4 developments and applications, IEEE Trans. Nucl. Sci. 53 (2006) 270.

[14] J. Allison et al., Recent developments in Geant4, Nucl. Instrum. Meth. A835 (2016) 186.

[15] I. Belyaev et al., Handling of the generation of primary events in Gauss, the LHCb simulation framework, J. Phys. Conf. Ser. 331 (2011) 032047.

[16] Moore [software], http://lhcbdoc.web.cern.ch/lhcbdoc/moore, [accessed 2018-11-06].


[17] DaVinci [software], http://lhcbdoc.web.cern.ch/lhcbdoc/davinci, [accessed 2018-11-06].

[18] R. Currie et al., Expanding the user base beyond HEP for the Ganga distributed analysis user interface, J. Phys. Conf. Ser. 898 (2017) 052032.

[19] Gauss [software], http://lhcbdoc.web.cern.ch/lhcbdoc/gauss, [accessed 2018-11-06].

[20] Boole [software], http://lhcbdoc.web.cern.ch/lhcbdoc/boole, [accessed 2018-11-06].

[21] D. Muller, M. Clemencic, G. Corti, and M. Gersabeck, ReDecay: A novel approach to speed up the simulation at LHCb, arXiv:1810.10362.

[22] J. de Favereau et al., DELPHES 3, A modular framework for fast simulation of a generic collider experiment, JHEP 02 (2014) 057, arXiv:1307.6346.

[23] C. Fitzpatrick and V. V. Gligorov, Anatomy of an upgrade event in the upgrade era, and implications for the LHCb trigger, LHCb-PUB-2014-027.

[24] R. Aaij et al., Performance of the LHCb trigger and full real-time reconstruction in Run 2 of the LHC, in preparation.

[25] LHCb collaboration, R. Aaij et al., Updated determination of D0–D̄0 mixing and CP violation parameters with D0 → K+π− decays, Phys. Rev. D97 (2018) 031101, arXiv:1712.03220.

[26] LHCb collaboration, R. Aaij et al., Amplitude analysis of the decay B0 → K0S π+π− and first observation of CP asymmetry in B0 → K∗(892)−π+, Phys. Rev. Lett. 120 (2018) 261801, arXiv:1712.09320.

[27] LHCb collaboration, R. Aaij et al., First measurement of the CP-violating phase φs^{dd̄} in B0s → (K+π−)(K−π+) decays, JHEP 03 (2018) 140, arXiv:1712.08683.

[28] LHCb collaboration, R. Aaij et al., Measurement of CP violation in B0 → D±π∓ decays, JHEP 06 (2018) 084, arXiv:1805.03448.

[29] LHCb collaboration, R. Aaij et al., Measurement of the CKM angle γ using B± → DK± with D → K0S π+π−, K0S K+K− decays, JHEP 08 (2018) 176, arXiv:1806.01202.

[30] F. Stagni et al., DIRAC in Large Particle Physics Experiments, J. Phys. Conf. Ser. 898 (2017) 092020.

[31] F. Stagni et al., LHCbDirac: distributed computing in LHCb, J. Phys. Conf. Ser. 396 (2012) 032104.

[32] R. G. Aparicio and I. C. Coz, Database on Demand: insight how to build your own DBaaS, J. Phys. Conf. Ser. 664 (2015) 042021.

[33] A. McNab, The vacuum platform, J. Phys. Conf. Ser. 898 (2017) 052028.

[34] A. McNab, P. Love, and E. MacMahon, Managing virtual machines with Vac and Vcycle, J. Phys. Conf. Ser. 664 (2015) 022031.

[35] P. Buncic et al., CernVM: A virtual software appliance for LHC applications, J. Phys. Conf. Ser. 219 (2010) 042003.

[36] C. Haen, P. Charpentier, M. Frank, and A. Tsaregorodtsev, The DIRAC Data Management System and the Gaudi dataset federation, J. Phys. Conf. Ser. 664 (2015) 042025.


[37] A. A. Alves, Jr et al., A Roadmap for HEP Software and Computing R&D for the 2020s, arXiv:1712.06982.

[38] N. Høimyr et al., BOINC service for volunteer cloud computing, J. Phys. Conf. Ser. 396 (2012) 032057.

[39] C. Bozzi, LHCb Computing Resources: 2018 requests and preview of 2019 requests, LHCb-PUB-2017-009.

[40] C. Bozzi, LHCb Computing Resources: 2019 requests and reassessment of 2018 requests,LHCb-PUB-2017-019.

[41] C. Bozzi, LHCb Computing Resources: 2019 and 2020 requests, LHCb-PUB-2018-001.

[42] C. Bozzi, LHCb Computing Resources: 2019/2020 requests and preview of Run 3 computing model, LHCb-PUB-2018-011.


ISBN 978-92-9083-499-1
