
DEPARTMENT OF COMPUTATIONAL PHYSICS AND INFORMATION TECHNOLOGIES

HORIA HULUBEI NATIONAL INSTITUTE FOR RESEARCH AND DEVELOPMENT IN PHYSICS AND NUCLEAR ENGINEERING

International Conference on Advanced Scientific Computing 12-14 September 2019

Sinaia, Prahova, Romania

BOOK OF ABSTRACTS

Organizers

Romanian Tier-2 Federation RO-LCG

Horia Hulubei National Institute for Physics and Nuclear Engineering

Sponsors

Ministry of Research and Innovation

IEEE Romanian Section


International Conference on Advanced Scientific Computing

Măgurele, 2019

ISBN 978-973-0-30119-9

DTP: Mara Tănase, Adrian Socolov

Cover: Mara Tănase


Scientific Advisory Board

Liviu Ixaru, IFIN-HH, Co-chair

Vladimir Melezhik, JINR, Dubna, Russia, Co-chair

Daniele Cesini, INFN, Bologna, Italy, Co-chair

Gheorghe Adam, IFIN-HH and JINR, Dubna, Russia

Sanda Adam, IFIN-HH and JINR, Dubna, Russia

Emanouil Atanassov, IICT-BAS, Sofia, Bulgaria

Antun Balaz, Institute of Physics Belgrade, Serbia

Ján Buša, Technical University of Košice, Slovakia

Ciprian Dobre, University Politehnica of Bucharest, Romania

Mihnea Dulea, IFIN-HH

Felix Fărcaș, INCDTIM Cluj-Napoca, Romania

Paul Gasner, 'Alexandru Ioan Cuza' University of Iasi, Romania

Boro Jakimovski, Ss. Cyril and Methodius University of Skopje, Rep. of Macedonia

Vladimir V. Korenkov, JINR, Dubna, Russia

Florin Pop, University Politehnica of Bucharest, Romania

Gabriel Popeneciu, INCDTIM, Cluj-Napoca, Romania

Octavian Rusu, 'Alexandru Ioan Cuza' University of Iasi, Romania

Emil Slușanschi, University Politehnica of Bucharest, Romania

Tatiana A. Strizh, JINR, Dubna, Russia

Nicolae Țăpuș, University Politehnica of Bucharest, Romania

Sorin Zgură, ISS, Magurele, Romania

Organizing Committee

Mihnea Dulea, IFIN-HH, Chair

Camelia Vișan, IFIN-HH, Scientific Secretary

Mihai Ciubăncan, IFIN-HH

Eduard Csavar, IFIN-HH

Dumitru Dinu, IFIN-HH

Corina Dulea, IFIN-HH

Felix Fărcaș, INCDTIM Cluj-Napoca

Bianca Neagu, IFIN-HH

Adrian Staicu, IFIN-HH

Nicolae Țăpuș, University Politehnica of Bucharest


ICASC 2019: CONFERENCE PROGRAM

12.09.2019 (10:00-17:10)

9:00   REGISTRATION (60')
9:30   WELCOME COFFEE (30')

OPENING SESSION (10:00-10:30)
10:00  Foreword: Continuing the conference series titled "Grid, Cloud and High-Performance Computing in Science" (2006-2018) – Mihnea Dulea, IFIN-HH
10:10  The CPC Program Library: 50 years of open-source software in computational physics – Prof. Stan Scott, Queen's University Belfast

SPONSORS SESSION (10:30-11:30)
10:30  Make Innovation real with DELL Technologies - AI and High Performance Computing solutions – Dan Bogdan, ISG Technology Consulting Manager, DELL EMC Romania
11:00  Accelerating HPC and AI applications with In-Network Computing – Gil Bloch, Principal Architect, Mellanox Technologies
11:30  BREAK (10')

IT INFRASTRUCTURES FOR RESEARCH (11:40-14:50)
11:40  European Infrastructure for Advanced Computing (TBC) – Yannick Legré, Managing Director of the EGI.eu Foundation
12:10  Overview of JINR computing infrastructure – Dr. Vladimir Korenkov, Joint Institute for Nuclear Research (JINR)
12:40  TBA – Eduard Bodor, Solution Architect for Data Centers, Schneider Electronics
13:10  LUNCH BREAK (60')
14:10  Romanian NREN – support for Romanian Research facilities – Octavian Rusu, UAIC
14:30  Participation of DFCTI/IFIN-HH in advanced computing for research – Mihnea Dulea, IFIN-HH

COMPUTING SUPPORT FOR LHC EXPERIMENTS (14:50-16:00)
14:50  Status and prospects of ATLAS (FR Cloud) computing – Frédéric Derue, LPNHE
15:10  ALICE Grid Computing resources utilization report and plans for Run 3 – Costin Grigoras, CERN
15:30  COFFEE BREAK (20')
15:50  Real-time conditions data distribution for the online data processing of the ALICE experiment – Daniel-Florin Dosaru, UPB

DATA MANAGEMENT, PROCESSING, AND MONITORING - I (16:10-17:10)
16:10  Data Management services in highly distributed environments: recent achievements of the eXtreme-DataCloud project – Daniele Cesini, INFN
16:30  Monitoring of exascale data processing – Dana Petcu, UVT
16:50  Jupyter Notebooks as scientific gateways to access cloud computing and distributed storage – Fernando Aguilar, IFCA

20:00  CONFERENCE DINNER

13.09.2019 (10:00-17:00), two parallel tracks

Track 1

RO-LCG REPORTS (10:00-11:20)
10:00  Introduction: Romanian participation in the WLCG collaboration – Mihnea Dulea, IFIN-HH
10:05  Multi-VO support at a Tier2 site in the perspective of the LHC's Run 3 – Mihai Ciubancan, IFIN-HH
10:20  Deployment and maintenance of an XRootD storage cluster for ALICE – Adrian Sevcenco, ISS
10:35  EOS deployment at the RO-03-UPB Grid site – Mihai Carabas, UPB
10:50  Migration of the UAIC Grid site in a new configuration – Ciprian Pinzaru, UAIC
11:05  Contribution of the RO-14-ITIM site to ATLAS computing – Felix Farcas, INCDTIM
11:20  COFFEE BREAK (20')

DATA MANAGEMENT, PROCESSING AND MONITORING - II (11:40-13:00)
11:40  Challenges in Mathematical Modeling and Computational Physics in LIT-JINR on 2020–2023 – Gheorghe Adam, JINR; IFIN-HH
12:20  Analyzing Data generated by the High Power Laser System in ELI-NP – Georgios Kolliopoulos, ELI-NP / IFIN-HH
12:40  Cluster monitoring system of the Multifunctional Information and Computing Complex (MICC) LIT – Ivan Kashunin, JINR
13:00  LUNCH BREAK (60')

14:00  A Research Management System based on Rdf Graphs – Loredana Mocean, UBB
14:20  Development of cloud computing, HTC, and HPC services at NGI-RO – Ionut Vasile, IFIN-HH

MOLECULAR BIOLOGY AND BIOCOMPUTING (14:40-16:00)
14:40  Application of the fragment molecular orbital method to the investigation of antimicrobial peptides interaction with membrane models – George Necula, IFIN-HH
15:00  Molecular dynamics simulations on the interaction between a silver nanoparticle and lipid monolayers – Maria Mernea, UB
15:20  COFFEE BREAK (20')
15:40  Distributed bioinformatics analyses on an SGE cluster, for variant calling on bovine whole-genome samples – Alexandru E. Mizeranschi, RDSB Arad

MACHINE LEARNING (16:00-17:00)
16:00  Accurate data identification in low signal-to-noise ratio series – Eduard Barnoviciu, UTI; UPB
16:20  Representing Character Sequences as Sets. A simple and intuitive string encoding algorithm for text data cleaning – Martin Marinov, TU Sofia
16:40  Speeding up atomistic DFT simulations by machine learning methods – Tudor Mitran, IFIN-HH

Track 2

NUMERICAL METHODS FOR PHYSICS (10:00-16:40)
10:00  User-friendly expressions for the coefficients in exponential fitting – Liviu Ixaru, IFIN-HH
10:20  Problem-dependent discretizations of evolutionary problems – Beatrice Paternoster, Univ. of Salerno
10:40  Quantum-quasiclassical model for calculation of resonant processes in hybrid atom-ion traps – Vladimir Melezhik, JINR
11:00  Adapted two step peer methods for advection diffusion problems – Dajana Conte, Univ. of Salerno
11:20  A tribute to Liviu Ixaru, developer of successful CP-algorithms – Marnix Van Daele, Ghent Univ.
11:40  Solving Quasi-exactly solvable differential equation by a canonical polynomials approach – Mohamad Khalil El Daou, CTS Kuwait
12:00  Finite Element Method and Programs for Investigation of Quantum Few-Body Systems – Sergue Vinitsky, JINR
12:20  Algorithms for Generating in Analytical Form Interpolation Hermite Polynomials in Hypercube – Alexander Gusev, JINR
12:40  PySlise: a Python Package for solving Schrödinger Equations – Toon Baeyens, Ghent Univ.
13:00  LUNCH BREAK (60')
14:00  Modelling high precision spectroscopy experiments – Dimitar Bakalov, INRNE
14:20  Gibbs Phenomenon in Clenshaw-Curtis Quadrature – Sanda Adam, JINR; IFIN-HH

QUANTUM CORRELATIONS RELEVANT FOR QUANTUM COMPUTING
14:40  Evolution of quantum correlations in Gaussian bosonic channels – Aurelian Isar, IFIN-HH
15:00  The quantum dynamics of a two-qubit pair longitudinally coupled with a single-mode boson field – Elena Cecoi, Inst. of Appl. Phys.
15:20  COFFEE BREAK (20')
15:40  Uhlmann Fidelity of Two Bosonic Modes in a Thermal Bath – Marina Cuzminschi, IFIN-HH; UB
16:00  Lasing and cooling effects of a quantum oscillator coupled with a three-level lambda-type system – Alexandra Mirzac, Inst. of Appl. Phys.
16:20  Fidelity of teleportation for two mode Gaussian resource states in a thermal bath – Alexei Zubarev, INFLPR; UB

14.09.2019 (10:00-11:30)

SOFTWARE APPLICATIONS (10:00-11:00)
10:00  Software developments for experimental data processing in NICA projects – Nicolay Voytishin, JINR
10:20  Improving cooling by airflow control inside the Data Center - model and simulation – Mihail Radu Catalin Trusca, INCDTIM
10:40  Detection and Validation of Asteroids using the NEARBY Software Platform – Liliana A. Boldea, IFIN-HH
11:00  COFFEE BREAK (20')

CLOSING SESSION: Conference review

12:00  EXCURSION and LUNCH (12:00-16:00)

CONTENTS

ICASC 2019, Sinaia, Romania, 12-14 September 2019

The CPC Program Library: 50 years of open-source software in computational physics .......... 15
    N.S. Scott

Overview of JINR computing infrastructure .......... 17
    Vladimir Korenkov, Tatiana Strizh, Andrey Dolbilov, Dmitry Podgainy, Nikolay Kutovsky, Valery Mitsyn, Nikolay Voytishin, and Gheorghe Adam

Romanian NREN – support for Romanian Research facilities .......... 19
    Octavian Rusu, Paul Gasner, Ciprian Pinzariu, Valeriu Vraciu

Participation of DFCTI/IFIN-HH in advanced computing for research .......... 21
    Mihnea Dulea, Dragos Ciobanu-Zabet, Mihai Ciubancan, and Ionut Vasile

Status and prospects of ATLAS (FR Cloud) computing .......... 23
    Frédéric Derue

ALICE Grid Computing resources utilization report and plans for Run 3 .......... 24
    Costin Grigoras, Latchezar Betev

Real-time conditions data distribution for the Online data processing of the ALICE experiment .......... 25
    Daniel-Florin Dosaru, Nicolae Ţăpuş, Mihai Carabaş, Costin Grigoraş

Multi-VO support at a Tier2 site in the perspective of the LHC's Run 3 .......... 26
    Mihai Ciubăncan, Mihnea Dulea

Deployment and maintenance of an XRootD storage cluster for ALICE .......... 28
    Adrian Sevcenco

EOS deployment at the RO-03-UPB site .......... 29
    Mihai Carabas, Costin Carabas, Nicolae Tapus

Migrating the UAIC Grid site to a new configuration .......... 31
    Ciprian Pinzaru, Valeriu Vraciu, Paul Gasner, Octavian Rusu

Contribution of the RO-14-ITIM site to ATLAS computing .......... 33
    Farcas Felix, Trusca Radu, Nagy Jefte, Albert Stefan

User-friendly expressions for the coefficients in exponential fitting .......... 34
    L. Gr. Ixaru

Problem-dependent discretizations of evolutionary problems .......... 36
    Dajana Conte, Raffaele D'Ambrosio and Beatrice Paternoster

Quantum-quasiclassical model for calculation of resonant processes in hybrid atom-ion traps .......... 38
    Vladimir Melezhik

Adapted two step peer methods for advection diffusion problems .......... 40
    Dajana Conte, Fakhrodin Mohamadi, Leila Moradi, Beatrice Paternoster

A tribute to Liviu Ixaru, developer of successful CP-algorithms .......... 42
    Marnix Van Daele, Toon Baeyens

Solving Quasi-exactly solvable differential equation by a canonical polynomials approach .......... 44
    Mohamad K. El-Daou

Finite Element Method and Programs for Investigation of Quantum Few-Body Systems .......... 46
    S.I. Vinitsky, G. Chuluunbaatar, O. Chuluunbaatar, A.A. Gusev, R.G. Nazmitdinov, P.W. Wen, L.L. Hai, V.L. Derbov, P.M. Krassovitskiy, and A. Góźdź

Algorithms for Generating in Analytical Form Interpolation Hermite Polynomials in Hypercube .......... 48
    A.A. Gusev, G. Chuluunbaatar, O. Chuluunbaatar, S.I. Vinitsky, L.L. Hai, T.T. Lua, V.L. Derbov, P.M. Krassovitskiy, A. Góźdź

PySlise: a Python Package for solving Schrödinger Equations .......... 50
    Toon Baeyens, Marnix Van Daele

Modelling high precision spectroscopy experiments .......... 51
    Dimitar Bakalov

Gibbs Phenomenon in Clenshaw-Curtis Quadrature .......... 53
    S. Adam and Gh. Adam

Evolution of quantum correlations in Gaussian bosonic channels .......... 55
    Aurelian Isar

The quantum dynamics of a two-qubit pair longitudinally coupled with a single-mode boson field .......... 56
    Elena Cecoi, Viorel Ciornea, Aurelian Isar, Mihai A. Macovei

Uhlmann Fidelity of Two Bosonic Modes in a Thermal Bath .......... 57
    Marina Cuzminschi, Alexei Zubarev, Aurelian Isar

Lasing and cooling effects of a quantum oscillator coupled with a three-level Λ-type system .......... 59
    Alexandra Mirzac, Mihai A. Macovei

Fidelity of teleportation for two mode Gaussian resource states in a thermal bath .......... 61
    Alexei Zubarev, Marina Cuzminschi, Aurelian Isar

Data Management services in highly distributed environments: recent achievements of the eXtreme-DataCloud project .......... 63
    Daniele Cesini

Monitoring of exascale data processing .......... 64
    Dana Petcu, Gabriel Iuhasz

Jupyter Notebooks as scientific gateways to access cloud computing and distributed storage .......... 66
    Fernando Aguilar

Challenges in Mathematical Modeling and Computational Physics in LIT-JINR on 2020–2023 .......... 68
    Gh. Adam, J. Buša, O. Chuluunbaatar, and P. Zrelov

Analyzing Data generated by the High Power Laser System in ELI-NP .......... 70
    Georgios Kolliopoulos, Bertrand de Boisdeffre

Cluster monitoring system of the Multifunctional Information and Computing Complex (MICC) LIT .......... 72
    I. Kashunin, A. Dolbilov, A. Golunov, V. Korenkov, V. Mitsyn, and T. Strizh

A Research Management System based on Rdf Graphs .......... 74
    Loredana Mocean, Miranda-Petronella Vlad

Development of cloud computing, HTC, and HPC services at NGI-RO .......... 76
    Ionut Vasile, Dragos Ciobanu-Zabet, and Mihnea Dulea

Application of the FMO method to investigation of antimicrobial peptides interaction with membrane models .......... 78
    George Necula, Mihaela Bacalum, Lorant Janosi, Mihai Radu

Molecular dynamics simulations on the interaction between a silver nanoparticle and lipid monolayers .......... 80
    Maria Mernea, Octavian Calborean, Speranta Avram, Ionut Vasile, Dan Florin Mihailescu

Distributed bioinformatics analyses on an SGE cluster, for variant calling on bovine whole-genome samples .......... 82
    Alexandru Eugeniu Mizeranschi, Ciprian Valentin Mihali, Radu Ionel Neamț, Mihai Carabaș, Daniela Elena Ilie

Accurate data identification in low signal-to-noise ratio series .......... 84
    Barnoviciu Eduard, Carata Serban, Ghenescu Veta, Ghenescu Marian, Mihaescu Roxana, Chindea Mihai

Representing Character Sequences as Sets. A simple and intuitive string encoding algorithm for text data cleaning .......... 86
    Martin Marinov, Alexander Efremov

Speeding up atomistic DFT simulations by machine learning methods .......... 88
    Tudor Luca Mitran, George Alexandru Nemneș

Software developments for experimental data processing in NICA projects .......... 90
    Mikhail Kapishin, Vasilisa Lenivenko, Vladimir Palichik, Valery Panin, Nikolay Voytishin

Improving cooling by airflow control inside the Data Center - model and simulation .......... 92
    Mihail-Radu-Cătălin Trușcă, Jefte Nagy, Ştefan Albert and Felix Fărcaș

Detection and Validation of Asteroids using the NEARBY Software Platform .......... 93
    Afrodita Liliana Boldea, Ovidiu Vaduvescu, Costin Radu Boldea


OPENING SESSION

The CPC Program Library: 50 years of open-source software in computational physics

N.S. Scott1
1 Editor-in-Chief and Program Library Director, Computer Physics Communications, Engineering and Physical Sciences, Queen's University Belfast, Belfast, BT7 1NN

The following announcement in the journal Nature on the 15th November, 1969 heralded the birth of the Computer Physics Communications Program Library (CPCPL) [1],

“An international library of computer programs in physics has been established at the Queen’s University, Belfast, with the help of a grant from the Science Research Council. The idea is that the library will acquire and store computer programs, supplying a copy of each to regular subscribers or copies of particular programs to individual scientists.”

The Library was in essence a research project, not a commercial venture, and the U.K. Science Research Council gave it financial support in the early stages until it could become self-supporting but non-profit making. Detailed descriptions of the programs were to be published in the North-Holland journal Computer Physics Communications (CPC) [2] which was launched in July 1969.

In 1994, twenty-five years later, paying tribute to the success of the venture, W.H. Wimmers, former President, North-Holland [1] admitted that “the running of the Library was too difficult for North-Holland from both technical and financial viewpoints .. technical handling .. required expertise that was simply not available to us … Financially, we were entirely in the dark, having not the slightest idea about subscription potential .. A Journal which would publish write-ups of the papers and the listings ..was something we understood and could handle .. So the project started in 1969 with separate management budgets for the library and journal.”.

Wimmers went on to note, ".. Drastic changes in hardware and software have taken place in the past 25 years, and it would be interesting [to] see a prognosis of developments towards the 21st century … May the project do well for another 25 years, and beyond".

We are now in the 21st century and another 25 years will be marked during ICASC 2019. It is apposite and timely, therefore, to address Wimmers' comments and to trace the development of the Library over the past 50 years, to assess its contribution to the computational physics community and to note its intended direction of travel. Some of these topics are introduced below and will be expanded upon at the conference.

"Drastic changes in hardware and software" have indeed taken place during the lifetime of the Library. In the early days, institutional subscribers received four magnetic tapes per year, each containing approximately 56,000 card images, for an annual subscription of £100. Individual requests were supplied on mini tape or program decks of punched cards in BCD, EBCDIC and other card codes. The magnetic tapes were written in a mode specifying tape width, number of tracks, density of characters per inch, inter-block gap and the character code most suitable for the institution, which involved translating from the author's 64-character code to that of the subscriber. Tapes required two days to prepare, including one day of cooling. There was no guarantee that tapes would arrive at their destination undamaged: ".. parcels could easily sit for hours at airports, possibly in the sun, leading to deterioration .."[3]. Programs were expected to be written in ANSI Fortran or Algol 60, a restriction that was relaxed in 1974.

Today, 3,300+ major computational physics programs, contributed by scientists worldwide, written in over 60 programming languages, covering 23 broad areas of physics, are seamlessly and instantly delivered, 24/7, free of charge to anyone with an internet connection, from a server at Queen’s University Belfast. The repository will move towards the end of 2019 when the Mendeley Data repository [4] will become the new home for the entire CPC Program Library.

Indeed, since 2016 all new computer programs belonging to Computer Programs in Physics (CPiP) papers published in CPC have been lodged with Mendeley Data [4]. Each is linked to the corresponding CPC article on ScienceDirect and each can be downloaded free of charge. Mendeley Data collaborates with Data Archive and Networked Services (DANS) [5], which promotes sustained access to digital research data and encourages researchers to archive and reuse data. Each CPC program is sent to the DANS archive for long-term storage with appropriate CC0 metadata and a resolvable, persistent and versioned DOI.

Reproducibility and re-usability of code are very important to CPC and, to facilitate this process, CPC and other Elsevier journals are currently running a trial in partnership with Code Ocean [6] to enable authors to share fully functional and executable code accompanying their articles. Code Ocean is a cloud-based reproducibility platform where authors upload code and data and configure the necessary computational environment for reproduction. The code, data, metadata, and computational environment, termed a 'compute capsule', can then be examined and executed by readers via a link from the article. Code Ocean supports all open-source programming languages, as well as Stata and MATLAB, and compute capsules can be created from existing GitHub folders by drag and drop.

CPC and CPCPL have undoubtedly been a resounding success for 50 years and I should like to pay tribute to the following: to Phil Burke (deceased 4 June 2019), who devised and led the project; to Val Burke and Shirley Jackson, who were the technical experts behind the design and implementation of the Library system; to Shirley Jackson (Program Librarian), Carol Phillips (Program Librarian) and John Ballantyne (Technical Editor), whose painstaking support for contributors and subscribers has enhanced CPCPL's reputation across the scientific community; and to the staff at North-Holland and Elsevier for their continued support throughout.

References

[1] CPC Program Library, http://cpc.cs.qub.ac.uk

[2] Computer Physics Communications Journal, https://www.journals.elsevier.com/computer-physics-communications

[3] W.H. Wimmers, Comput. Phys. Commun. 84 (1994) x

[4] Mendeley Data, https://data.mendeley.com/datasets/journals/00104655

[5] Data Archive and Networked Services, https://dans.knaw.nl/en

[6] Code Ocean, https://codeocean.com

IT INFRASTRUCTURES FOR RESEARCH

Overview of JINR computing infrastructure

Vladimir Korenkov1, Tatiana Strizh1, Andrey Dolbilov1, Dmitry Podgainy1, Nikolay Kutovsky1, Valery Mitsyn1, Nikolay Voytishin1, and Gheorghe Adam1,2

1 Laboratory of Information Technologies, Joint Institute for Nuclear Research, 6 Joliot-Curie St, Dubna, Moscow Region, Russia 141980

2 Horia Hulubei National Institute for Physics and Nuclear Engineering (IFIN-HH), 30 Reactorului, 077125 Măgurele - Bucharest, Romania

One of the main tasks of the Laboratory of Information Technologies (LIT) at JINR is to develop the network, information and computing infrastructure of JINR for the research and production activities of the Institute and its Member States on the basis of state-of-the-art information technologies. In order to fulfil this task, a Multifunctional Information and Computing Complex (MICC) is under permanent development. At present, the MICC comprises almost the entire JINR computing infrastructure.

The MICC meets the requirements for a modern, highly performant scientific computing complex: multi-functionality, high performance, task-adapted data storage, high reliability and availability, information security, scalability, a customized software environment for the different existing user groups, high-performance telecommunications and a modern local area network. To reach these goals, major new additions are made and the existing facilities are upgraded according to priorities set by the most urgent JINR needs.

Recently, the focus has been on the development and modernization of the JINR telecommunication and network infrastructure, including the modernization of the JINR local area network with the aim of providing data storage and processing resources for the NICA project, as well as on the modernization of the MICC engineering infrastructure, including the uninterruptible power supply systems and the conditioning and ventilation system. Special attention was paid to the creation of the NICA project IT infrastructure, including both a long-term storage system for experimental data (BM@N, MPD, SPD) and a reliable and effective system for offline data processing.

The comprehensive MICC monitoring system that was developed collects information from the different components of the computing complex: the engineering infrastructure, network, computing nodes, task launching systems, data storage elements, grid services, etc. All of the above guarantees a high level of MICC reliability.

The performance and data storage systems of the MICC basic grid component, the JINR CMS Tier-1 site, were extended as planned, which ensured a steady second place worldwide for the JINR Tier-1 among the CMS Tier-1 sites by the number of processed events.


The JINR Tier-2 level site was actively developed. It provides the processing of data from four LHC experiments (ALICE, ATLAS, CMS, LHCb) as well as from a whole range of non-LHC virtual organizations (VOs) (BESIII, BIOMED, COMPASS, MPD, NOvA, STAR, ILC). The MICC also provides computing power for computations done outside the grid environment. This is essential for experiments such as NOvA, PANDA, BESIII, NICA/MPD/BM@N and others, as well as for local users from all the JINR Laboratories.

Another part of the MICC is the JINR cloud infrastructure. Work on the integration of the cloud structures of the JINR Member States was carried out, as well as the maintenance of common IT services and computing infrastructures. Training courses were held, and the deployment and support of additional IT services were ensured upon request within the framework of the JINR neutrino program. The NOvA experiment, which was the first neutrino experiment to actively use the LIT cloud, now has the largest number of allocated resources, followed by the Baikal-GVD and JUNO experiments. With support from the DLNP neutrino program, the number of computing cores in the cloud infrastructure was doubled.

Another fast-evolving part of the MICC is the HybriLIT heterogeneous computing platform, consisting of an education and testing testbed and the "Govorun" supercomputer, both sharing a unified software and information environment. The "Govorun" supercomputer is designed to carry out resource-intensive and massively parallel computations for the solution of a wide range of challenges facing JINR, which becomes possible due to the heterogeneity (presence of different types of computing accelerators) of the supercomputer hardware architecture. The education and testing testbed is aimed at exploring the possibilities of novel computing paradigms and IT solutions, at conducting training courses on parallel programming techniques, and at providing modern tools for the development, debugging and profiling of parallel applications and application packages. The HybriLIT platform users are able to develop and debug their applications on the education and testing testbed and then to carry out calculations on the supercomputer, which shortens the path to an effective use of the supercomputer resources.

To enhance the possibilities for developing mathematical models and algorithms and for carrying out resource-intensive computations, including on graphics accelerators, which significantly reduce the computing time, an ecosystem for ML/DL and data analysis tasks has been created and is actively developed for the users of the HybriLIT platform.

The MICC project has proved to be a successful aggregation of all the computing and infrastructure resources of LIT. It provides a reliable and well-built computing environment for scientists from JINR and its Member States to carry out their research. The presence of such top-level computing facilities as the "Govorun" supercomputer and the CMS Tier-1 center contributes to a significant increase in JINR's visibility worldwide.


Romanian NREN – support for Romanian Research facilities

Octavian Rusu1,2, Paul Gasner1,2, Ciprian Pinzariu2, Valeriu Vraciu1,2
1 Agency ARNIEC/RoEduNet, Mendeleev 21-25, Bucharest, Romania
2 "Alexandru Ioan Cuza" University of Iasi, Carol I, 1, Iasi, Romania

The Romanian National Research and Education Network (NREN) supports the national research activities by providing the necessary resources, in terms of data transport capacities and associated services, at the national, European and international levels. The Romanian NREN, known as RoEduNet, is owned and operated by the Agency ARNIEC. At the national level, RoEduNet operates a Level 3 network based on DWDM technology. The backbone consists of multiple 100 Gbps links between the main network nodes and multiple 10 Gbps links across the country to aggregate the traffic from all research and education institutions. In the European landscape, RoEduNet is a member of the GÉANT network, connected to the European academic and research community through multiple 10 Gbps links. Other connections are operated to access the Internet, and there are multiple links to various Internet traffic exchanges. These connections are used to optimize the data communications towards different destinations on the Internet.

The GÉANT network provides connectivity between all European NRENs, linking most of the research and education communities around the world through a dedicated data network. Also, specific services are provided to achieve the required performance for different distributed applications: from high throughput for CERN to low latency for radio astronomy. These services are available to the Romanian research and education community through the Romanian NREN, but are limited by the connectivity bandwidth to the GÉANT network. This limit is a result of the evolution of the GÉANT network in Eastern Europe, which is based on leased circuits: the easternmost GÉANT POP connected to the dark fiber (DF) cloud, the technology able to provide easily upgradable connectivity, is Budapest. It should be mentioned that the GÉANT team developed a regional study concerning Eastern Europe, but the inclusion of the Bucharest POP in the DF cloud is not the main option.

The demands of the academic and research community for bandwidth to GÉANT are constantly increasing, and the major factor driving the need for high-speed connectivity is the existing and developing research infrastructures in Romania. In recent years, the CERN collaboration encouraged RoEduNet to increase the capacity of its GÉANT connectivity. In addition, a dedicated link for the main research facilities located in Magurele was deployed and a new RoEduNet POP providing 100 Gbps based on DWDM technology was installed.


Future plans should take into account that two more ESFRI facilities, ELI-NP and DANUBIUS-RI, are under development and will require new networking capacities. For ELI-NP connectivity, the Magurele POP is already deployed and could provide the requested services. In terms of data networking needs, DANUBIUS-RI will build two facilities in Romania: the Hub and the Data Centre. Since the Hub facility is planned to be located in Murighiol, RoEduNet already installed one 100 Gbps link in Tulcea, the nearest POP. The Data Centre will provide all e-services of the distributed research infrastructure for all European components. Providing reliable high-speed connectivity for the DANUBIUS-RI Data Centre is a priority for RoEduNet, considering all available options. The data transport capacities should be easily upgradable when necessary, so the manner of achieving this goal must be simple, without complicated bureaucratic procedures. In light of the experience gained by operating the national DWDM network for over 10 years, this technology could fulfil these requirements.

On the other hand, in order to meet the above requests, the external RoEduNet connectivity has to be flexible enough and not limited to the existing links to the GÉANT POP in Bucharest, provided through leased circuits. The Romanian NREN must be able to respond quickly to future demands, so RoEduNet acted proactively and initiated a project to link the Romanian NREN directly to the GÉANT DF cloud: RoEduNet4. The main goal of this project is to extend the national DWDM network to reach a GÉANT POP over optical fiber. To achieve this, a new dark fiber has to be leased and transport equipment has to be installed along its path. By using this approach, multiple transmission channels with various capacities could be installed, on request, to reach the backbone of the GÉANT network.

RoEduNet, the Romanian NREN, aims to meet the needs of the research and academic community in the European context in terms of network connectivity and the services associated with data communication. Beyond the "traditional" demands of the community, new requests have arisen due to the development of distributed research infrastructures in Romania, which bring important facilities that must be connected to a high-performance network. Both the internal and the external links of RoEduNet have to keep pace with these requests, and the running projects will accomplish this task.


Participation of DFCTI/IFIN-HH in advanced computing for research

Mihnea Dulea1, Dragos Ciobanu-Zabet1, Mihai Ciubancan1, and Ionut Vasile1
1 Department of Computational Physics and Information Technology (DFCTI), Horia Hulubei National Institute for R&D in Physics and Nuclear Engineering (IFIN-HH), Magurele, Ilfov, Romania

In addition to using numerical methods for conducting inter- and multidisciplinary studies in areas of current research interest, DFCTI performs the functions of manager of IFIN-HH’s IT infrastructure, provider of advanced IT resources and services for research, as well as coordinator of the national contribution to the WLCG and EGI collaborations.

This communication briefly reviews the status of and recent evolutions in the participation of DFCTI and IFIN-HH in the development and exploitation of the national infrastructure for scientific computing.

Currently, IFIN-HH manages the largest distributed infrastructure for advanced computing in the national system of public research, which offers scientists high throughput computing (HTC) and high-performance computing (HPC) solutions, as well as Cloud computing services (provided by the CLOUDIFIN center).

The WLCG site hosted at DFCTI, together with three other HTC sites managed by the particle physics and hadron physics groups of IFIN-HH, provides 75% of the processing power and more than 85% of the storage capacity with which the Romanian Tier-2 Federation (RO-LCG) contributes to the international support of the ALICE, ATLAS and LHCb experiments. The communication will present the strategy adopted for increasing the resource capacity and the structural measures implemented by DFCTI in order to satisfy the requirements of the third run of the LHC, together with the prospects of providing computational resources for the future High-Luminosity LHC (HL-LHC).

The second HTC site hosted at DFCTI, GRIDIFIN, ensures the intensive sequential computing support necessary for implementing the priority directions of the IFIN-HH Strategy (such as nuclear physics and astrophysics, nanophysics, biology, and multi- and interdisciplinary research) and also provides user access to parallel computing resources, in both CPU and GPGPU technologies, for satisfying intensive computing requirements at ELI-NP (such as particle-in-cell simulations), or in the fields of nanostructures, computational biology and bioinformatics, and radiopharmaceutical research for nuclear medicine.

Cloud technology has been implemented relatively recently to support multiple computing projects that generate intermittent data flows, in areas such as the physics of the interaction of the laser radiation with nuclear matter, condensed matter physics and biophysics. Currently DFCTI aims to diversify the Cloud services offered to the international research community through EGI Federated Cloud, and to contribute as resources and services provider to the development of the European Open Science Cloud (EOSC).


Acknowledgements: This work was partly funded by the Ministry of Research and Innovation under the contracts PN 19 06 02 05 (program NUCLEU), 71 (Romanian-JINR cooperation project, Order 397/27.05.2019), and no. 6 / 2016 (program PNIII-5.2-CERN-RO).

COMPUTING SUPPORT FOR LHC EXPERIMENTS

Status and prospects of ATLAS (FR Cloud) computing

Frédéric Derue1
1 Laboratoire de physique nucléaire et de hautes énergies (LPNHE-CNRS/IN2P3), Jussieu, Paris, France

Keywords: ATLAS; Distributed computing; Computing models; HL-LHC

The ATLAS experiment successfully commissioned a software and computing infrastructure to support the physics program during LHC Run 2. The next phases of the accelerator upgrade will present new challenges in the offline area.

In particular, at the High-Luminosity LHC the data-taking conditions will be very demanding in terms of computing resources: between 5 and 10 kHz of event rate from the HLT to be reconstructed (and possibly further reprocessed), with an average pile-up of up to 200 events per collision, and an equivalent number of simulated samples to be produced. The same parameters for the current run are lower by up to an order of magnitude.

While processing and storage resources would need to scale accordingly, the funding situation allows one, at best, to consider a flat budget over the next few years for offline computing needs.

In this presentation we review the current usage of ATLAS computing and storage resources, the expected challenges of the HL-LHC phase, and ideas about the possible evolution of the ATLAS computing model, the distributed computing tools, and the offline software to cope with such a challenge.

The particular case of the ATLAS FR-Cloud which includes the WLCG sites in Romania will be discussed.


ALICE Grid Computing resources utilization report and plans for Run 3

Costin Grigoras1, Latchezar Betev1
1 CERN, Geneva, Switzerland

The ALICE experiment at the CERN LHC successfully completed LHC Run 2 in 2018. In its 10 years of data taking and continuous Grid operations, the experiment has collected 39 PB of physics data and has produced twice that volume in Monte Carlo simulation and analysis results.

The infrastructure to collect, store, transfer and process the collected and generated information has evolved continuously since the start of data taking, to adapt to the ever-increasing demands of the experiment and the ALICE physics community. An even greater evolution is necessary for LHC Run 3, when ALICE moves from triggered data taking to a continuous readout mode, increasing by two orders of magnitude the acquisition rates and consequently the amount of collected data. The Run 3 computing model is also changing to accommodate the new Online-Offline (O2) prompt reconstruction facility, directly connected to the detector readout.

A complete rewrite of the experiment software stack is under way to better leverage many-core CPUs, and it will include GPU-aware simulation and reconstruction code. The new data analysis paradigms are based on a cutting-edge message passing framework, which allows many analysis tasks to run in parallel and thus optimizes the I/O requirements of the Grid jobs.

In this paper we give an accounting of how the distributed resources have been used in the past, the contribution of the Romanian computing centres to the data warehousing and processing effort, and an outlook on the resource requirements for Run 3 of the ALICE experiment.


Real-time conditions data distribution for the Online data processing of the ALICE experiment

Daniel-Florin Dosaru1, Nicolae Ţăpuş1, Mihai Carabaş1, Costin Grigoraş2
1 University POLITEHNICA of Bucharest
2 CERN, Geneva, Switzerland

ALICE (A Large Ion Collider Experiment) is a heavy-ion detector on the Large Hadron Collider (LHC) ring. It is designed to study the physics of strongly interacting matter at extreme energy densities, where a phase of matter called quark-gluon plasma forms.

The new ALICE synchronous data reconstruction facility for Run 3 needs a real-time conditions and calibration data distribution mechanism. New calibration and conditions data objects are produced at up to 50 Hz and have to be propagated to about 2000 servers in order to implement a feedback loop for the online data reconstruction.

For efficient data distribution in this environment, the designed solution uses a network multicast delivery mechanism. In addition, the system relies on caching services distributed on all 2000 servers, which receive and keep the most recent object versions in memory, making them available to the processes running on the local host via a REST API. The REST API is also implemented by the central object repository and by the development instances, allowing for transparent connection fallback and access to the conditions data from all jobs running on the Grid infrastructure.
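To illustrate the access pattern described above, here is a minimal sketch, in Python, of a client that tries the on-host cache first and transparently falls back to the central repository; the endpoint URLs, port and object path are hypothetical placeholders, not the actual ALICE service addresses.

    # Minimal sketch of a conditions-data client with cache-then-central fallback.
    # All endpoints and object paths below are hypothetical placeholders.
    import urllib.request
    import urllib.error

    LOCAL_CACHE = "http://localhost:8083"           # hypothetical on-host caching service
    CENTRAL_REPO = "http://conditions.example.org"  # hypothetical central repository

    def fetch_object(path, timeout=2.0):
        """Return the raw bytes of a conditions object, trying the local cache first."""
        for base in (LOCAL_CACHE, CENTRAL_REPO):
            try:
                with urllib.request.urlopen(f"{base}/{path}", timeout=timeout) as resp:
                    return resp.read()
            except (urllib.error.URLError, OSError):
                continue  # transparent fallback to the next endpoint
        raise RuntimeError(f"conditions object {path!r} unavailable")

    # Example usage (hypothetical object path):
    # payload = fetch_object("TPC/Calib/GainMap/latest")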

In this paper we show the details of the new experiment conditions data framework and how we managed to have a reliable delivery system based on inherently unreliable transport mechanisms.

RO-LCG REPORTS

Multi-VO support at a Tier2 site in the perspective of the LHC’s Run 3

Mihai Ciubăncan1, Mihnea Dulea1
1 Department of Computational Physics and Information Technology (DFCTI), Horia Hulubei National Institute for R&D in Physics and Nuclear Engineering (IFIN-HH), Magurele, Ilfov, Romania

The estimated rate of raw data taking during the LHC's third run will be significantly higher than in Run 2, due to the increased integrated luminosity and trigger improvements. This will lead to an overall increase in the amount of derived data, which will demand larger storage and processing capacities for offline computing, together with software upgrades and data handling optimization. In particular, in preparation for Run 3, the Tier 2 centres have to adapt their infrastructure to the new requirements.

This communication reports on the solutions recently adopted at the DFCTI's Tier 2 site, RO-07-NIPNE, regarding resource upgrades and the improvement of the computing support for the ALICE, ATLAS, and LHCb user communities.

With 5 independent Compute Elements (CEs) [1], some of which were running deprecated software, the site had a structure too complex to be managed efficiently, so measures were taken to aggregate services on fewer, upgraded CEs.

First, all the computing resources of the site had to be migrated from Scientific Linux 6 (SL6) to the CentOS 7 operating system, and the CREAM-CEs had to be decommissioned due to the end of support for this service.

The migration of the computing resources dedicated to the ATLAS and ALICE experiments from SL6 to CentOS 7 was completed. The old CREAM-CEs and an old ARC-CE + SLURM setup with SL6 resources dedicated to ATLAS and ALICE have been decommissioned. The approval of LHCb Computing is awaited in order to decommission the last CREAM-CE, dedicated to LHCb, and to migrate its worker nodes (WNs) to CentOS 7.

Currently, besides the above-mentioned CREAM-CE, two ARC-CEs with CentOS 7 are in production. One of them, running ARC version 5, uses HTCondor as the Local Resource Management System (LRMS) and has more than 1000 slots dedicated to ATLAS, for analysis and production jobs, both single-core and multicore; the slots are allocated dynamically according to the job requirements (1 or 8 cores).

Multiple queues were deployed on the other ARC-CE, running version 6 with the HTCondor LRMS: one queue for each supported Virtual Organization (VO) (ALICE, ATLAS, LHCb, ops), with more than 2100 slots as computing resources.


In order to use the resources as efficiently as possible, the nodes are allocated dynamically and fair-share policies have been implemented not just per VO but also per type of job (e.g., analysis, single-core production/simulation, multicore production/simulation, etc.). While for the ATLAS VO the only file that had to be modified for ARC-CE + HTCondor was the configuration file, for the ALICE and LHCb VOs it was also necessary to edit the submit-condor-job script used by the ARC-CE to launch jobs to the HTCondor LRMS and to publish the status of the jobs in the System Information. This information is used by ALICE and LHCb to monitor the status of their jobs.
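As an illustration of how such policies can be expressed, below is a minimal sketch of an HTCondor configuration combining per-VO accounting groups with partitionable slots; the group names and quota values are illustrative assumptions, not the site's actual settings.

    # condor_config fragment (illustrative sketch; quotas are not the site's real values)
    # Per-VO fair-share via accounting groups; jobs are mapped to a group through
    # the AccountingGroup attribute set at submission time.
    GROUP_NAMES = group_atlas, group_alice, group_lhcb, group_ops
    GROUP_QUOTA_DYNAMIC_group_atlas = 0.45
    GROUP_QUOTA_DYNAMIC_group_alice = 0.35
    GROUP_QUOTA_DYNAMIC_group_lhcb  = 0.15
    GROUP_QUOTA_DYNAMIC_group_ops   = 0.05
    GROUP_ACCEPT_SURPLUS = True      # idle shares may be consumed by other groups

    # One partitionable slot per node lets single-core and 8-core jobs coexist,
    # matching the dynamic 1-core/8-core allocation described above.
    NUM_SLOTS = 1
    NUM_SLOTS_TYPE_1 = 1
    SLOT_TYPE_1 = cpus=100%
    SLOT_TYPE_1_PARTITIONABLE = True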

Because ALICE jobs open up to 20 files each, the I/O rates are very high and create traffic congestion when a network file system such as NFS is used. This translates into a very high job failure rate when more than 50 jobs run concurrently. To avoid the congestion, the ARC-CE had to be configured so that the input files of the jobs are copied to the local disks of the WNs. This dropped the failure rate of ALICE jobs to close to zero.

An example is included to illustrate the difference in I/O rates between ALICE jobs processing files from the WN's local disk and jobs accessing files through NFS. It is shown that in the local disk case the I/O rate is 3-4%, while in the NFS case it can increase to over 90%.
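For reference, staging job inputs to the WN local disk is typically expressed in the ARC-CE configuration along the following lines. This is a hedged sketch only: the exact block and option names depend on the ARC release, and the paths are hypothetical.

    # arc.conf fragment (sketch; verify block and option names against the
    # arc.conf reference of the deployed ARC release; paths are hypothetical)
    [lrms]
    lrms = condor
    shared_filesystem = no        # input files are copied to the node instead of
                                  # being read over a shared (NFS) filesystem
    scratchdir = /local/scratch   # WN-local directory receiving the session dir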

Acknowledgements: This work was partly funded by the Ministry of Research and Innovation under the contracts no. 6 / 2016 (program PNIII-5.2-CERN-RO) and PN 19 06 02 05 (program NUCLEU).

References

[1] M. Ciubăncan, M. Dulea, Optimization of the job management in a multi-queue environment, RO-LCG 2018 “Grid, Cloud, and High-Performance Computing in Science”, Cluj-Napoca, 17-19 October 2018.


Deployment and maintenance of an XRootD storage cluster for ALICE

Adrian Sevcenco1
1 Institute of Space Science, The High Energy Astrophysics and Advanced Technologies Laboratory, Atomistilor 409, Magurele, Ilfov, Romania 077125

The ALICE experiment, like all other LHC-based experiments, uses the well-known XRootD solution for clustered storage.

With 72 PB of data present in the storage elements and with predictions of a steep increase in the requirements for Run 3, it is increasingly important to have a good overview of the infrastructure that is used, from the XRootD software itself down to the kernel knobs and the low-level network and block device layers. While there are software options (also based on XRootD, like EOS) that have a number of advantages over a plain XRootD installation, their requirements are potentially prohibitive, so small Tier 2 sites could find the plain XRootD implementation more FTE- and cost-efficient.

In this paper we give an overview of the procedures for putting online an XRootD cluster to be used by the ALICE storage infrastructure, and of the associated optimizations of both XRootD and the low-level kernel subsystems. Moreover, we explore, from the user's point of view, the options for interacting with the Grid-level storage and present a few useful recipes.
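As a flavour of the kernel-level tuning involved, the sysctl fragment below shows TCP buffer settings commonly used as starting points on high-throughput storage servers; the values are generic illustrations, not the settings adopted at the site.

    # /etc/sysctl.d/90-storage-tuning.conf (generic starting points only)
    net.core.rmem_max = 67108864              # max socket receive buffer (64 MB)
    net.core.wmem_max = 67108864              # max socket send buffer (64 MB)
    net.ipv4.tcp_rmem = 4096 87380 67108864   # min/default/max TCP receive buffer
    net.ipv4.tcp_wmem = 4096 65536 67108864   # min/default/max TCP send buffer
    net.core.netdev_max_backlog = 250000      # packet queue length for fast NICs
    # Apply with: sysctl --system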

Acknowledgements: This work was partly funded by the Ministry of Research and Innovation under the contract no. 6 / 2016 (program PNIII-5.2-CERN-RO).


EOS deployment at the RO-03-UPB site

Mihai Carabas1, Costin Carabas1, Nicolae Tapus1
1 University POLITEHNICA of Bucharest

The grid site RO-03-UPB is currently contributing to EGI [1] and the WLCG collaboration [2] through the ALICE project [3], with more than 500 CPU cores. The site is hosted in three datacentres distributed within the UPB campus.

Throughout the infrastructure, a Hyper-V cluster runs all the grid management services (compute element, storage element, management of data nodes, etc.). This virtualization cluster ensures the live migration of the services, which contributes to reaching a high SLA. Virtualization technology also provides virtual machine replication across the three datacentres, for disaster recovery purposes. The worker nodes run on top of OpenStack cloud software, with no performance issues (jobs run with no CPU performance cost). Using OpenStack allows us to scale resources up and down, thus providing elasticity.

Until recently, given the rather small storage capacity of the site, access to the distributed storage was provided through the XRootD framework. Following the recommendations of the ALICE collaboration to globally adopt newer software better adapted to an increased storage space, we decided to install and configure the EOS distributed filesystem.

Our EOS setup includes one Management and Meta Data Server (MGM) and three File Storage Servers (FST):

eos-mgm.grid.pub.ro (IPv4 and IPv6) - Hyper-V VM

eos-fst{1-3}.grid.pub.ro (IPv4 and IPv6) - each with 2 mounts of 11 TB [each 11 TB volume is a RAID 5 (old hardware) with 1 spare]

As deployment by running the eos-deploy script published on the ALICE website initially proved impossible, over a timespan of three months we applied incremental fixes to the script, together with the EOS and ALICE communities. In the end we had a functional script, which was also updated on the ALICE website [4].

This communication will briefly present the solutions we found to the multiple problems that appeared in the process of installing EOS for the ALICE project, such as: differences between the 'Aquamarine' and 'Citrine' versions; enforcement and documentation of XRootD 4.7 as a dependency; incorrect invocation commands when switching to systemd on the EL7 OS; errors in reading the public key from the TkAuthz.Authorization file; some file-copy failures without log notifications; and failing third-party file transfers.

After solving all the errors above, we obtained a functional deployment of EOS, as can be seen in Fig. 1 below.

Figure 1. FSTs at RO-03-UPB

The write availability of RO-03-UPB's storage is presented in Fig. 2. While the downtimes associated with the problems presented above are clearly visible in the first half of the displayed period, the availability of the storage element became close to 100% after these issues were solved.

Figure 2. FSTs availability at RO-03-UPB

In conclusion, we report a successful EOS deployment at the RO-03-UPB site. In cooperation with the EOS and ALICE computing communities, we have solved a list of difficult issues that had arisen and needed to be taken into account when deploying EOS in the ALICE environment.

Acknowledgements: This work was partly funded by the Ministry of Research and Innovation under the contracts no. 6 / 2016 (program PNIII-5.2-CERN-RO) and no. 13 / 17.10.2017.

References:

[1] https://www.egi.eu/

[2] http://wlcg.web.cern.ch/

[3] http://alimonitor.cern.ch

[4] https://alien.web.cern.ch/content/documentation/howto/site/eosinstallation

Migrating the UAIC Grid site to a new configuration

Ciprian Pinzaru1, Valeriu Vraciu2, Paul Gasner2, Octavian Rusu2 1 Digital Communication Department, ‘Alexandru Ioan Cuza’ University of Iasi,

2 Romanian Education Network [email protected]

Introduction

“Alexandru Ioan Cuza” University of Iasi contributes to the Worldwide LHC Computing Grid (WLCG) with the site RO-16-UAIC, which has supported the ATLAS virtual organization (VO) since 2010. This report presents the main results obtained at the site and the developments of the last year.

Resources

The site has a heterogeneous structure: the worker nodes (WN) are provided by a legacy cluster of 1U servers with 8 CPU cores each, together with a newer blade system whose processing capacity will increase by 40% this year.

According to the REBUS resource monitoring [1], the site currently supports the ATLAS VO with 396 logical CPUs, amounting to 4,100 HEPSPEC06 units, which are used in Monte Carlo (MC) simulations. As RO-16-UAIC was one of the first ATLAS French Cloud sites to become storage-less, the DPM disk support for the MC jobs is remotely provided by RO-07-NIPNE.

The main services (compute element (CE) and resource management, perfSONAR, BDII, UI, Squid, DHCP, DNS) run on virtual machines installed in a backup configuration and hosted by two servers with 12 CPU cores, 32 GB RAM and two 10-Gigabit Ethernet interfaces.

The switch that connects the blade servers used as WNs also provides redundancy to the central router. The servers used for host virtualization are connected to the grid switch through 10-Gigabit Ethernet links, which significantly improves network performance of the site.

Recent developments

Taking into account the necessity of implementing new technologies and technological updates within the grid centres, the ATLAS Computing management has requested the upgrade of the sites from software with end-of-life support to newer versions. Thus, deadlines have been set for moving the worker nodes to the CentOS7 operating system [2] and for abandoning the CREAM service, which will be supported only until the end of the year [3].

In accordance with ATLAS Computing requirements, the main activity at the site was the migration of the servers from the deprecated SL6 to CentOS7 OSs, and the migration of the compute element from CREAM to ARC-CE with SLURM resource management system. The CREAM service ce-grid.grid.uaic.ro was decommissioned according to the EGI procedure [4].

At present, the migration operations are completed and two new batch queues are registered in the PanDA monitoring system: one for single-CPU-core simulation (RO-16-UAIC_ARC_CL7) and the other for multicore simulation (RO-16-UAIC_ARC_MCORE_CL7).

Within the overall WLCG campaign for enabling the IPv6 protocol on the perfSONARs and storage elements of the sites [5], IPv6 was implemented in dual-stack configuration on the perfSONAR of RO-16-UAIC.

To monitor the server-room environment and the overall condition of the HVAC equipment, we implemented an automation system based on a Raspberry Pi, whose mode of operation will also be presented at the conference.

Figure 1. Monitoring the data centre environment using the Raspberry Pi solution
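
As an indication of how such a probe can be read out, the sketch below polls a DS18B20 1-wire temperature sensor through the Linux sysfs interface; the sensor model and the alert threshold are our illustrative assumptions, not necessarily those of the RO-16-UAIC installation.

    import glob
    import time

    # DS18B20 probes appear under /sys/bus/w1/devices/28-*/w1_slave once the
    # w1-gpio and w1-therm kernel modules are loaded (assumed setup).
    SENSOR_GLOB = '/sys/bus/w1/devices/28-*/w1_slave'
    ALERT_C = 30.0  # illustrative server-room alert threshold, in Celsius

    def read_temp_c(path):
        with open(path) as f:
            lines = f.read().splitlines()
        if not lines[0].endswith('YES'):  # CRC check of the reading failed
            return None
        return float(lines[1].split('t=')[1]) / 1000.0

    while True:
        for dev in glob.glob(SENSOR_GLOB):
            t = read_temp_c(dev)
            if t is not None and t > ALERT_C:
                print('ALERT:', dev, t, 'C')  # a real system would log or e-mail
        time.sleep(60)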

Acknowledgement: This work was partly funded by the Ministry of Research and Innovation under the contract no. 6 / 2016 (program PNIII-5.2-CERN-RO).

References

[1] WLCG REsource, Balance & Usage, [Online]. Available at https://wlcg-rebus.cern.ch/apps/topology/federation/252/ [Accessed June 2019]

[2] Atlas Computing, [Online]. Available at https://twiki.cern.ch/twiki/bin/view/AtlasComputing/CentOS7Readiness [Accessed 01 06 2019].

[3] EGI Portal, [Online]. Available at https://operations-portal.egi.eu/broadcast/archive/2293 [Accessed 01 06 2019].

[4] EGI, [Online]. Available at https://wiki.egi.eu/wiki/PROC12 [Accessed 01 06 2019].

[5] LCG, [Online]. Available at https://twiki.cern.ch/twiki/bin/view/LCG/WlcgIpv6 [Accessed 01 06 2019].

Contribution of the RO-14-ITIM site to ATLAS computing

Farcas Felix1, Trusca Radu1, Nagy Jefte1, Albert Stefan1 1 National Institute for R&D of Isotopic and Molecular Technologies (INCDTIM)

67-103 Donat str., 400293 Cluj-Napoca, Romania

Since its commissioning ten years ago, INCDTIM's Grid site has been dedicated to the computational support of the LHC's ATLAS experiment, within the Romanian Tier-2 Federation. As such, over the last decade it has completed all the stages of transforming the offline computing infrastructure requested by the ATLAS and WLCG collaborations in order to update and optimize the HTC services.

Following the initiative of the ATLAS Computing management to simplify the global interaction with the Tier-2s, the site was involved in the recent implementation of structural changes within RO-LCG that were meant to increase efficiency and lower operational costs. As a result, in 2017 the site became 'diskless', being exclusively dedicated to Monte Carlo simulations and event-generation jobs that access over the WAN the remote storage hosted at another site (RO-07-NIPNE).

The last 12 months marked a significant change in site functionality and other major improvements.

Thus, we first note the migration of the servers to the CentOS 7 OS. Also, in order to upgrade the middleware technology, we changed it from UMD (Unified Middleware Distribution) to the ARC (Advanced Resource Connector) middleware, and implemented the SLURM (Simple Linux Utility for Resource Management) batch system. After finalizing this process, the monitoring service and perfSONAR were updated, and the IPv6 configuration was deployed.

Currently, two queues are registered in the ATLAS (big)PanDA monitoring system: the single core queue RO-14-ITIM_ARC_CL7 and the 8-core queue RO-14-ITIM_ARC_MCORE_CL7.

This communication will summarize the recent configuration changes and improvements, and will give an overview of the results obtained at the RO-14-ITIM site during the last decade.

Acknowledgement: This work was funded by the Ministry of Research and Innovation under the contract no. 6 / 2016 (program PNIII-5.2-CERN-RO) and the Romanian-JINR cooperation project (Order 397-72/27.05.2019).

NUMERICAL METHODS FOR PHYSICS

User-friendly expressions for the coefficients in exponential fitting

L. Gr. Ixaru1,2 1 “Horia Hulubei” National Institute of Physics and Nuclear Engineering, Department of Theoretical Physics,

P.O.Box MG-6, Bucharest, Romania email: [email protected]

2 Academy of Romanian Scientists, 54 Splaiul Independentei, 050094, Bucharest, Romania

Exponential fitting is a mathematical procedure for generating numerical methods for various operations on functions with a pronounced oscillatory or hyperbolic variation. The typical condition for generating the coefficients of such methods is that the method must be exact for a set of functions including exponential functions, see [1].

The most popular set of this type is

1, x, \dots, x^K, e^{\pm\mu x}, x e^{\pm\mu x}, \dots, x^P e^{\pm\mu x},   (1)

where μ may be either real or imaginary, μ = ω or μ = iω (ω ≥ 0), respectively. This set was first introduced in [2] for the solution of the Schrödinger equation, but in the meantime it has become intensively used in many other contexts (numerical differentiation, quadrature, interpolation, the solution of differential or integral equations, etc.). The values of the integers K ≥ −1 and P ≥ −1 depend on the context; K = −1 or P = −1 means that the power functions or the exponential functions, respectively, are absent from (1).

For example, if the numerical method consists in the computation of the first derivative of a function y(x) by the three-point formula

y'(x_i) \approx \frac{1}{2h}\left[a_2\,y(x_i+h) + a_1\,y(x_i) + a_0\,y(x_i-h)\right],   (2)

the number of coefficients to be determined is M = 3, and then we have two possibilities (the rule M - 3 = K + 2P is satisfied in this case): P = -1, K = 2 (the classical case) and P = K = 0, which furnish the coefficients a_1 = 0 and a_0 = -a_2, with

a_2 = 1, \qquad a_2(z) = \frac{z}{\sin(z)}, \qquad a_2(z) = \frac{z}{\sinh(z)}, \quad \text{where } z = \omega h.   (3)

The first a2, a constant, corresponds to the well-known central difference approximation; it is appropriate when y(x) is a smoothly varying function and becomes exact when y(x) is a polynomial of degree at most two. The other two are z-dependent and accommodate, in order, the case when y(x) is an oscillatory function of the form y(x) = f1(x) sin(ωx) + f2(x) cos(ωx) and the case of hyperbolic variation, y(x) = f1(x) sinh(ωx) + f2(x) cosh(ωx), where f1(x) and f2(x) are smoothly varying. The results obtained with these coefficients become exact when f1 and f2 are constant, irrespective of the values of ω and h.

Of acute importance is the fact that the z-dependent analytic expressions of a2 written above are insufficient for an accurate numerical computation, because they have the indeterminate form 0/0 at z = 0, and therefore series expansions must be used for small z. These are:

a_2(z) = 1\Big/\left[1 - \frac{z^2}{3!} + \frac{z^4}{5!} - \frac{z^6}{7!} + \cdots\right], \qquad a_2(z) = 1\Big/\left[1 + \frac{z^2}{3!} + \frac{z^4}{5!} + \frac{z^6}{7!} + \cdots\right],   (4)

corresponding to the trigonometric and hyperbolic regime, respectively, such that, altogether, we have to provide four formulas to accurately determine a2(z).

The presented example is a very simple one, but the problem of reducing the package of four formulas needed to compute each coefficient to a single one, of universal use irrespective of the fitting regime (trigonometric or hyperbolic) and of how big or small z is, is of major interest for applications. We call this a user-friendly expression of that coefficient. Many (perhaps hundreds of) numerical approaches based on exponential fitting have been developed over time, but only in some of them has the derivation of user-friendly expressions been considered, notably in book [1]; see also [3, 4]. The set of special functions ξ(Z), η0(Z), η1(Z), η2(Z), …, with Z = ∓z² in the trigonometric/hyperbolic case, respectively (see [5]), was used for this purpose. In many other papers the coefficients were derived using MATHEMATICA, resulting in analytic expressions in terms of trigonometric or hyperbolic sine and cosine, and quite often in only one of the two regimes. An accurate numerical evaluation of the coefficients obtained in this frame requires the extra knowledge of the corresponding series expansions but, unfortunately, such expansions are not always provided in the published papers. Our presentation, based on [6], will show that it is possible to produce user-friendly expressions from a minimum of provided material, which typically consists of the analytic expression of the corresponding coefficient in only one of the two regimes. Moreover, the suggested algorithm for generating user-friendly expressions is rather direct and can be programmed without difficulty. Applications will also be presented.
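
To make the idea concrete, here is a minimal sketch of one such single-routine evaluation of a2(z) (our own illustration, not the algorithm of [6]), following the Z = ∓z² convention of [5]: one short series in Z covers both regimes near the 0/0 point z = 0, and the closed forms take over elsewhere.

    import math

    def a2(z, regime='trig', tol=1e-16):
        # Z = -z**2 (trigonometric) or Z = +z**2 (hyperbolic), so that
        # 1/a2 = 1 + Z/3! + Z**2/5! + ... equals sin(z)/z or sinh(z)/z.
        Z = -z*z if regime == 'trig' else z*z
        if abs(Z) < 0.25:
            # series branch, safe at the indeterminate point z = 0
            term, s, k = 1.0, 1.0, 1
            while abs(term) > tol:
                term *= Z / ((2*k) * (2*k + 1))  # next term Z**k/(2k+1)!
                s += term
                k += 1
            return 1.0 / s
        return z / (math.sin(z) if regime == 'trig' else math.sinh(z))

    print(a2(0.0))           # 1.0, the classical central-difference value
    print(a2(1.0, 'trig'))   # z/sin(z) = 1.18839...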

References

[1] Ixaru L. Gr. and Vanden Berghe G.: Exponential Fitting, Kluwer Academic Publishers, Dordrecht/Boston/London 2004.

[2] Ixaru L. Gr. and Rizea M.: A Numerov-like scheme for the numerical solution of the Schrödinger equation in the deep continuum spectrum of energies. Comput. Phys. Commun. 19, 23-27 (1980)

[3] Paternoster, B.: Present state-of-the-art in exponential fitting. A contribution dedicated to Liviu Ixaru on his 70-th anniversary. Comput. Phys. Commun. 183, 2499-2512 (2012)

[4] Conte D. and Paternoster B.: Modified Gauss-Laguerre exponential fitting based formulae, J. Sci. Comput 69:227-243 (2016)

[5] Ixaru L. Gr.: Operations on oscillatory functions, Comput. Phys. Commun. 105, 1-19 (1997)

[6] Ixaru L. Gr.: Exponential and trigonometrical fittings: user-friendly expressions for the coefficients, Numer. Algor. (2018) https://doi.org/10.1007/s11075-018-0642-8

Problem-dependent discretizations of evolutionary problems

Dajana Conte1, Raffaele D’Ambrosio2 and Beatrice Paternoster1

1 Department of Mathematics University of Salerno

Via Giovanni Paolo II, 132 84084 Fisciano (Sa), Italy

e-mail: {dajconte,beapat}@unisa.it 2 Department of Engineering and Computer Science and Mathematics

University of L’Aquila Via Vetoio, Loc. Coppito

67100 L’Aquila, Italy e-mail: [email protected]

The talk focuses on the numerical solution of evolutionary problems, based both on ordinary and on partial differential equations, by problem-dependent discretizations that take into account relevant qualitative features of the problems, in the spirit of the non-polynomial fitting introduced in several pioneering works by Liviu Gr. Ixaru [5, 8].

We first adapt the method-of-lines for the discretization of partial differential equations generating periodic wavefronts. The revised scheme is based on trigonometrically fitted finite differences, exploiting the a-priori knowledge of the qualitative behaviour of the solution, gaining advantages in terms of efficiency and accuracy with respect to classical schemes mostly relying on algebraic polynomials. The developed finite differences are exact on trigonometrical basis functions, coupled with an Implicit-Explicit (IMEX) time-integration [3]. The coefficients of the resulting scheme depend on unknown parameters to be properly estimated: such an estimate is performed by an efficient a-priori minimization of the leading term of the local error [4]. The effectiveness of the approach is confirmed by a selection of numerical experiments.

We next consider the adapted discretization of nonlinear differential problems by means of Jacobian-dependent Runge-Kutta schemes, following Ixaru [6, 7]. Such an adaptation consists in taking into account how the error in the internal stages of a Runge-Kutta method propagates into the numerical solution computed at the grid points. The adapted scheme depends on the Jacobian of the vector field at the current grid point and compensates the contamination of the final stage by the errors produced in the internal stages. The modified technique is compared with the classical one, and the improvement in the stability regions of the revised scheme over the classical one is shown [1, 2].

References

[1] R. D'Ambrosio, L. Gr. Ixaru, B. Paternoster, Construction of the EF-based Runge-Kutta methods revisited, Comput. Phys. Commun. 182, 322-329 (2011).

[2] R. D'Ambrosio, B. Paternoster, G. Santomauro, Revised exponentially fitted Runge-Kutta-Nystrom methods, Appl. Math. Lett. 30, 56-60 (2014).

[3] R. D'Ambrosio, M. Moccaldi, B. Paternoster, Adapted numerical methods for advection-reaction-diffusion problems generating periodic wavefronts, Comput. Math. Appl. 74(5), 1029-1042 (2017).

[4] R. D'Ambrosio, M. Moccaldi, B. Paternoster, Parameter estimation in IMEX-trigonometrically fitted methods for the numerical solution of reaction-diffusion problems, Comput. Phys. Commun. 226, 55-66 (2018).

[5] L. Gr. Ixaru, G. Vanden Berghe, Exponential Fitting, Springer Netherlands (2004).

[6] L. Gr. Ixaru, Runge–Kutta method with equation dependent coefficients, Comput. Phys. Commun. 183(1), 63-69 (2012).

[7] L. Gr. Ixaru, Runge–Kutta methods with equation dependent coefficients, Lecture Notes Comput. Sci., 327-336 (2013).

[8] B. Paternoster, Present state-of-the-art in exponential fitting. A contribution dedicated to Liviu Ixaru on his 70th birthday, Comput. Phys. Commun. 183(12), 2499-2512 (2012).

Quantum-quasiclassical model for calculation of resonant processes in hybrid atom-ion traps

Vladimir Melezhik1,2 1 Bogoliubov Laboratory of Theoretical Physics, Joint Institute for Nuclear Research, Dubna Moscow Region 141980,

Russian Federation 2 Peoples' Friendship University of Russia (RUDN University), Miklukho-Maklaya St. 6, Moscow 117198,

Russian Federation

In recent years there has been rapidly growing interest in ultracold hybrid atom-ion systems, caused by the new opportunities they open for modeling various quantum systems and processes with controllable properties. In particular, in the paper of Melezhik and Negretti [1] the confinement-induced resonances (CIRs) in ultracold hybrid atom-ion systems were predicted.

The prediction was made in the "static approximation" for the ion. This approximation was also used in the recent paper [2], where CIRs in the two-center problem were analysed in the pseudopotential approach. However, going beyond the "static approximation" is a pressing problem, due to the principally unavoidable effect of the ion "micromotion" in the Paul trap [3].

To adequately describe the atom-ion dynamics in the hybrid atom-ion trap, we developed a quantum-quasiclassical approach. The computational scheme is based on the quantum-quasiclassical model successfully used earlier for solving a number of problems of collision of quantum particles [4], in particular the ionization of the helium ion in collisions with protons [5]. In this computational scheme, the time-dependent Schrödinger equation, describing the collisional atom dynamics in a waveguide-like trap, is integrated simultaneously with the classical Hamilton equations for the ion motion in a linear Paul trap; the three-dimensional Schrödinger equation is thus coupled with the six classical Hamilton equations during the confined atom-ion collision.
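
For orientation only, the sketch below shows the classical half of such a scheme: an RK4 integration of the Hamilton equations for one transverse ion coordinate in a linear Paul trap (a Mathieu-type equation of motion). The dimensionless parameters are made up for illustration, and the coupling to the atom's Schrödinger equation is omitted.

    import math

    a, q, W = 0.0, 0.2, 1.0   # illustrative Mathieu parameters and drive frequency

    def accel(x, t):
        # ion equation of motion: x'' = -(W**2/4)*(a + 2q*cos(W*t))*x
        return -(W**2 / 4.0) * (a + 2.0*q*math.cos(W*t)) * x

    def rk4_step(x, v, t, h):
        # one RK4 step for the Hamilton equations dx/dt = v, dv/dt = accel(x, t)
        k1x, k1v = v, accel(x, t)
        k2x, k2v = v + 0.5*h*k1v, accel(x + 0.5*h*k1x, t + 0.5*h)
        k3x, k3v = v + 0.5*h*k2v, accel(x + 0.5*h*k2x, t + 0.5*h)
        k4x, k4v = v + h*k3v, accel(x + h*k3x, t + h)
        return (x + h*(k1x + 2*k2x + 2*k3x + k4x)/6.0,
                v + h*(k1v + 2*k2v + 2*k3v + k4v)/6.0)

    x, v, t, h = 1.0, 0.0, 0.0, 0.01
    for _ in range(10000):    # slow secular oscillation plus fast micromotion
        x, v = rk4_step(x, v, t, h)
        t += h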

Two technical (but crucial) problems had to be resolved to implement this computational scheme. A robust algorithm was developed for calculating the temporal evolution of the ion trajectory r_I(t), which is perturbed in the Paul trap by the collision with an atom. Also, to calculate the matrix element of the atom-ion interaction, a regularized potential was used instead of the C12-C4 polarization potential, which is singular as r(t) = |r_A - r_I(t)| → 0. This approach permitted a quantitative investigation of the confined atom-ion collisions near the CIR, which enabled us to calculate the dependence of the ratio a_p/a_s at the point of the atom-ion CIR on the ion energy. Here a_p is the linear dimension of the atomic trap and a_s is the atom-ion s-wave scattering length in free space.

The computations were performed for two kinds of ion confining trap. First, we considered an effective trap with time-independent frequencies. Afterwards, we evaluated the effect of the ion "micromotion" on the CIRs by including the oscillating term of the Paul trap. It was shown that the confined motion (and "micromotion") of the ion does not destroy the CIR. The shift of the CIR position as a function of the mean transverse and longitudinal ion energy was calculated. We also plan to discuss the description, within the developed approach, of the heating/cooling process in confined atom-ion systems, an important problem for planning controllable experiments with such systems [3].

This work was supported by the Russian Foundation for Basic Research, Grant No. 18-02-00673 and the “RUDN University Program 5-100”.

References:

[1] V.S. Melezhik and A. Negretti, Phys. Rev. A94 (2016) 022704-1--8.

[2] S. Shadmehri and V.S. Melezhik, Phys. Rev. A99 (2019) 032705-1--11.

[3] D. Leibfried, R. Blatt, C. Monroe, and D. Wineland, Rev. Mod. Phys. 75 (2003) 281-324.

[4] V.S. Melezhik and L.A. Sevastianov, Analytical and Computational Methods in Probability Theory, Lecture Notes in Computer Science 10684 (2017) 449-458.

[5] V.S. Melezhik, J.S. Cohen, and C.Y. Hu, Phys. Rev. A69 (2004) 032709-1–13.

Adapted two-step peer methods for advection-diffusion problems

Dajana Conte1, Fakhrodin Mohamadi2, Leila Moradi2, Beatrice Paternoster1 1 Department of Mathematics, University of Salerno, Italy

2 University of Hormozgan, Iran

We consider advection-diffusion problems whose solution exhibits an oscillatory behaviour, such as the Boussinesq equation [1, 11]

\frac{\partial h}{\partial t} = \frac{K}{S}\left(h\,\frac{\partial^2 h}{\partial x^2} + \left(\frac{\partial h}{\partial x}\right)^2 - \vartheta\,\frac{\partial h}{\partial x}\right),

where h is the height of the water table, S is the drainable porosity, K is the hydraulic conductivity and \vartheta is the slope of the impervious base. This equation is important in dynamic interactions between aquifers and the sea in coastal regions and describes groundwater flows on a sloping impervious base. If h shows a small deviation from the weighted depth then, by setting \gamma = T/S and \nu = K\vartheta/S, where T is the transmissivity, the model assumes the form [1, 11]:

h_t(x,t) = \gamma\,h_{xx}(x,t) - \nu\,h_x(x,t), \quad (x,t) \in (0,X) \times (0,T),
h(x,0) = h_0(x), \quad x \in [0,X],                                          (1)
h(0,t) = h(X,t) = f(t), \quad t \in [0,T],

where h0(x) is the initial water table and f(t) is the periodic vertical perturbation relative to the mean sea level due to tidal waves. If the Boussinesq equation is subject to the periodic boundary condition

𝑓(𝑡) = exp(𝑖𝜔𝑡), (2)

the solution exhibits oscillations both in space and in time, as it assumes the form

h(x,t) = \exp\left[\left(\frac{\nu}{2\gamma} - \mu\right)x\right]\exp\left[i(\omega t - \rho x)\right],

where

\mu = \frac{1}{2\gamma}\sqrt{2\gamma\sqrt{\omega^2 + \frac{\nu^4}{16\gamma^2}} + \frac{\nu^2}{2}}, \qquad \rho = \frac{1}{2\gamma}\sqrt{2\gamma\sqrt{\omega^2 + \frac{\nu^4}{16\gamma^2}} - \frac{\nu^2}{2}}.

The semi-discretization in space of this equation gives rise to a system of ordinary differential equations

y'(t) = f(t, y(t)), \quad y(t_0) = y_0, \quad t \in [t_0, T], \quad f: \mathbb{R} \times \mathbb{R}^d \to \mathbb{R}^d,   (3)

whose dimension d depends on the number of spatial points.
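
For illustration, a minimal method-of-lines sketch for problem (1) with the real part of the forcing (2) is given below; the parameter values are ours, and the crude explicit Euler stepper merely stands in for the adapted peer methods introduced next.

    import numpy as np

    g, v, X, d = 1.0, 0.5, 1.0, 100     # gamma, nu, domain length, interior points
    dx = X / (d + 1)
    omega = 2.0 * np.pi
    f = lambda t: np.cos(omega * t)     # Re exp(i*omega*t), the tidal forcing (2)

    def rhs(t, h):
        # second-order central differences for h_xx and h_x, Dirichlet ends f(t)
        hb = np.concatenate(([f(t)], h, [f(t)]))
        return (g*(hb[2:] - 2*hb[1:-1] + hb[:-2])/dx**2
                - v*(hb[2:] - hb[:-2])/(2*dx))

    h, t, dt = np.zeros(d), 0.0, 1e-5   # y'(t) = f(t, y(t)) as in (3), d unknowns
    for _ in range(1000):
        h += dt * rhs(t, h)             # explicit Euler stand-in time stepper
        t += dt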

In order to develop efficient and accurate numerical methods, we propose an adapted numerical integration that exploits a-priori known information about the behaviour of the exact solution by means of the exponential fitting strategy, whose foremost exponent is Liviu Gr. Ixaru (see the monograph [9]). To him we owe valuable results on the error, the construction of new methods, the innovative estimation of the parameters and so on: see for instance [2, 3, 7, 8, 9], just to cite some contributions.

We present a general class of EF two-step peer methods for the numerical integration of ordinary differential equations (3) having oscillatory solutions [4, 5]:

Y_{n+1} = (B(w) \otimes I_d)\,Y_n + h\,(A(w) \otimes I_d)\,F(Y_n) + h\,(R(w) \otimes I_d)\,F(Y_{n+1}),

where

Y_n = [Y_{ni}]_{i=1}^{s}, \quad F(Y_n) = [f(t_{ni}, Y_{ni})]_{i=1}^{s}, \quad Y_{ni} \approx y(t_{ni}), \quad t_{ni} = t_n + c_i h, \quad i = 1, \dots, s, \quad n = 0, \dots, N-1, \quad c_s = 1,

and w = \omega h, where \omega \in \mathbb{R} is the oscillating frequency of the problem. The matrices A(w), B(w), R(w) are determined by imposing that the peer method is exact when the solution belongs to the fitting space with \mu = i\omega.

The attribute "peer" means that all s stages have the same good accuracy and stability properties; such methods are therefore quite efficient for stiff problems, as they do not suffer from the order reduction phenomenon [10]. Moreover, peer methods are very suitable for parallel implementation, which may become necessary when the number of spatial points increases. As regards the spatial semi-discretization of the problem, we adopt adapted finite differences [1, 6].

The effectiveness of this problem-oriented approach is shown through numerical tests. This work has been supported by GNCS-INDAM.

References
[1] A. Cardone, R. D'Ambrosio, B. Paternoster, Exponentially fitted IMEX methods for advection-diffusion problems, J. Comput. Appl. Math. 316, 100-108, 2017.
[2] J.P. Coleman, L. Gr. Ixaru, P-stability and exponential-fitting methods for y'' = f(x, y), IMA J. Numer. Anal. 16(2), 179-199, 1996.
[3] J.P. Coleman, L. Gr. Ixaru, Truncation errors in exponential fitting for oscillatory problems, SIAM J. Numer. Anal. 44(4), 1441-1465, 2006.
[4] D. Conte, R. D'Ambrosio, M. Moccaldi, B. Paternoster, Adapted explicit two-step peer methods, J. Numer. Math., in press.
[5] D. Conte, L. Moradi, B. Paternoster, Adapted implicit two-step peer methods, in preparation.
[6] R. D'Ambrosio, B. Paternoster, Numerical solution of a diffusion problem by exponentially fitted finite difference methods, SpringerPlus 3:425, 2014.
[7] L. Gr. Ixaru, Operations on oscillatory functions, Comput. Phys. Commun. 105, 1-19, 1997.
[8] L. Gr. Ixaru, Exponential and trigonometrical fittings: user-friendly expressions for the coefficients, Numer. Algor., in press.
[9] L. Gr. Ixaru, G. Vanden Berghe, Exponential Fitting, Kluwer, Boston-Dordrecht-London, 2004.
[10] B.A. Schmitt, R. Weiner, Parallel two-step W-methods with peer variables, SIAM J. Numer. Anal. 42, 265-282, 2004.
[11] N. Su, F. Liu, V. Anh, Tides as phase-modulated waves inducing periodic groundwater flow in coastal aquifers overlaying a sloping impervious base, Environmental Modelling & Software 18, 937-942, 2003.

A tribute to Liviu Ixaru, developer of successful CP-algorithms

Marnix Van Daele1, Toon Baeyens1 1 Dept. Applied Math., Comp. Science and Statistics, Ghent University, Krijgslaan 281- Gebouw S9, B9000 Gent

The impact of Liviu Ixaru and his work on the Numerical Mathematics research group at Ghent University cannot be overestimated: for almost 25 years the group has been inspired by his ideas. From about 1995 to 2005 Liviu Ixaru was a regular guest at Ghent University, and in this period there was a very intensive and fruitful collaboration with Guido Vanden Berghe, Hans De Meyer and Marnix Van Daele. There was a common interest in so-called exponential-fitting methods, which were developed for problems whose solutions exhibit exponential or highly oscillatory behaviour. Originally much attention was devoted to the Schrödinger equation; later the focus was on the more general class of Sturm-Liouville problems. Together, they developed the Fortran code SLCPM12 [1]. The key part of the algorithm is based on Piecewise Constant Perturbation (CP) methods and was developed by Ixaru.

This code was the basis for the Ph.D. work of Veerle Ledoux, which resulted in 2005 in Matslise [2], a Matlab package for the numerical solution of Sturm-Liouville and Schrödinger equations. A well-chosen name, as it was a nice coincidence that the letters L and I in Matslise can also refer to Liviu Ixaru! Ledoux, Vanden Berghe and Van Daele also developed other Matlab codes, such as Matscs for solving coupled-channel Schrödinger equations [3]; this Matlab code was based on Ixaru's LILIX code [4]. The Matslise package was further improved, and in 2016 Ledoux and Van Daele brought out a new release, Matslise 2.0 [5]. Meanwhile the success of the CP-algorithms had also led to algorithms for solving other types of problems, such as two-dimensional Schrödinger problems [6] and time-dependent Schrödinger problems [7].

However, as the CP-algorithms were mainly tuned to solving one-dimensional Schrödinger problems, the algorithms for the two-dimensional and time-dependent problems presented in [6] and [7] were not optimized from a computational point of view. Therefore, they have been redesigned and reimplemented in C++ by Toon Baeyens so that they can be called from within Python. This results in a speedup factor of (roughly speaking) 100. The (simplified) Python version of Matslise is called Pyslise.

In the present talk, we will focus on the problem Ixaru considered in [6]: the solution of the 2D Schrödinger problem

\frac{\partial^2 \psi}{\partial x^2} + \frac{\partial^2 \psi}{\partial y^2} = (V(x, y) - E)\,\psi

over a rectangle [𝑥min, 𝑥max] × [𝑦min, 𝑦max] with Dirichlet boundary condition 𝜓(𝑥, 𝑦) = 0.

First a partition y_min = y_0 < y_1 < ... < y_K = y_max is constructed and the region of integration is divided into sectors S_k = [x_min, x_max] × [y_k, y_{k+1}], k = 0, ..., K - 1. In each sector the solution of the 2D problem is then expressed as an expansion (truncated after N terms) over eigenfunctions of a 1D Hamiltonian obtained by approximating the 2D potential V(x,y) by a function V[k](x). The eigenfunctions of these 1D problems can be obtained very efficiently with the Pyslise algorithms.

Given the solution at the beginning of sector k, one then calculates how the solution evolves between yk-1 and yk. This computation involves the solution of an N-dimensional system of first order ODEs. The solution of this ODE system gives the coefficients of the expansion.

Once we know how to propagate the solution from one sector to another, we can use a shooting procedure to propagate the solution from y_0 to y_M (0 ≤ M ≤ K - 1) and from y_K backward to y_M. At y_M both solutions should match smoothly; this only happens when E is an eigenvalue. The problem that needs to be solved is exactly that of the coupled multi-channel Schrödinger equation. To do so, an optimized and more accurate (order 10 instead of order 6) version of the Matscs algorithm is used.

λ             K = 11     K = 15     K = 19     log_{15/11}(|Δ15/Δ11|)   log_{19/15}(|Δ19/Δ15|)
3.195918085   -2.7e-6    -1.4e-7    -1.6e-8     9.48                     9.37
5.526743851   -1.9e-5    -7.6e-7    -5.1e-8    10.39                    11.41
7.557803323   -4.6e-5    -1.3e-6    -7.6e-8    11.43                    12.12
8.031272338    1.4e-5    -2.0e-7    -5.4e-10   13.60                    25.04
8.444581367   -5.1e-5    -3.5e-6    -4.0e-7     8.65                     9.19
9.928061089   -2.8e-5    -1.3e-6    -2.5e-7     9.81                     7.14
11.311817036  -1.1e-4    -8.0e-6    -7.7e-7     8.36                     9.91
12.103254772  -5.9e-5     9.3e-6     4.3e-6     5.96                     3.23
12.201179225  -2.4e-5     9.9e-6     4.6e-6     2.80                     3.23
13.332331427  -3.3e-4    -1.3e-5    -1.5e-6    10.37                     9.27
14.348268555   1.9e-5    -4.8e-8    -7.9e-8    19.33                    -2.15
14.450478663  -1.3e-5    -8.4e-6    -8.7e-7     1.36                     9.58
14.580554845  -2.4e-6    -8.8e-6    -1.9e-6    -4.20                     6.45

Table 1: Numerical values of the first 13 eigenvalues, their errors (for the cases K = 11, K = 15 and K = 19) and an estimation of the order of the overall method.

As an illustration, we consider Ixaru's test problem

V(x,y) = (1 + x2)(1 + y2) -5.5 ≤ x,y ≤ 5.5

and display estimations of the errors in the first eigenvalues for the case N = 12 with K = 11, K = 15 and K = 19 sectors, all taken in one step (the results of a run with N = 12 over K = 31 sectors and 3 steps per sector are taken as a reference). In order to get an idea of the order p of the overall method, we have compared the numerical errors Δ_K obtained with different values of K. If Δ_K ≈ C h_K^p is obtained with step size h_K, then Δ_{K1}/Δ_{K2} ≈ (K2/K1)^p. From the last two columns we see that for the lowest eigenvalues the estimated value of p is in the neighbourhood of 10.
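
These order estimates can be reproduced directly; a small sketch, using the first row of Table 1 as input (the slight differences from the tabulated 9.48 and 9.37 come from the rounded Δ values):

    import math

    def observed_order(D1, D2, K1, K2):
        # Delta_K ~ C*h_K**p with h_K ~ 1/K gives p = log|D1/D2| / log(K2/K1)
        return math.log(abs(D1 / D2)) / math.log(K2 / K1)

    print(observed_order(-2.7e-6, -1.4e-7, 11, 15))  # ~9.5
    print(observed_order(-1.4e-7, -1.6e-8, 15, 19))  # ~9.2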

References
[1] L.Gr. Ixaru, G. Vanden Berghe, H. De Meyer, SLCPM12 - A program for solving regular Sturm-Liouville problems, Comp. Phys. Comm. 118 (1999) 259-277.
[2] V. Ledoux, M. Van Daele, G. Vanden Berghe, MATSLISE: A MATLAB package for the numerical solution of Sturm-Liouville and Schrödinger equations, ACM Trans. Math. Softw. 31(4) (2005) 532-554.
[3] V. Ledoux, M. Van Daele, G. Vanden Berghe, A numerical procedure to solve the multichannel Schrödinger eigenvalue problem, Comp. Phys. Comm. 176 (2007) 191-199.
[4] L.Gr. Ixaru, LILIX - A package for the solution of the coupled channel Schrödinger equation, Comp. Phys. Comm. 147 (2002) 834-852.
[5] V. Ledoux, M. Van Daele, Matslise 2.0: A Matlab toolbox for Sturm-Liouville computations, ACM Trans. Math. Softw. 42(4), Article 29 (June 2016).
[6] L.Gr. Ixaru, New numerical method for the eigenvalue problem of the 2D Schrödinger equation, Comput. Phys. Commun. 181 (2010) 1738-1742.
[7] V. Ledoux, M. Van Daele, The accurate numerical solution of the Schrödinger equation with an explicitly time-dependent Hamiltonian, Comput. Phys. Commun. 185 (2014) 1589-1594.

Solving Quasi-exactly solvable differential equation by a canonical polynomials approach

Mohamad K. El-Daou1 1 College of Technological Studies, Kuwait

A quasi-exactly solvable (QES) model refers to any second-order differential equation with polynomial coefficients of the form

(Dy)(x) = A(x)\frac{d^2 y}{dx^2} + B(x)\frac{dy}{dx} + C(x)\,y(x) = 0,   (1)

A(x) = \sum_{i=0}^{p+2} A_i x^i, \quad B(x) = \sum_{i=0}^{p+1} B_i x^i, \quad C(x) = \sum_{i=0}^{p} C_i x^i,

where a pair of exact polynomials {y(x), C(x)}, with respective degrees deg[y] = n and deg[C] = p, are to be found simultaneously in terms of the coefficients {A_i, B_i} of the two given polynomials {A(x), B(x)}.

QES problems have applications in engineering, chemistry, physics and many other fields. This includes mathematical settings that involve Schrödinger equations describing problems in quantum mechanics, such as anharmonic singular potentials, Coulombically repelling electrons on a multidimensional sphere, a planar Dirac electron in magnetic fields, and kink stability analysis, among many other problems.

Different techniques can be used to solve QES problems: the functional Bethe ansatz method, the asymptotic iteration method and the Lie algebraic approach, among many others (see [1], [2], [3]). With the functional Bethe ansatz method, for example, one seeks y(x) = \prod_{i=1}^{n}(x - \xi_i), expressed in terms of its n distinct roots {ξ_i}, which are determined by solving an n × n system of nonlinear algebraic equations. Thereafter, the coefficients of C(x) are calculated in terms of {ξ_i}. If one wishes to increase the order n of the polynomial y(x), then a new nonlinear algebraic system of higher dimension has to be solved.

In this paper we propose an alternative method that seeks y(x) = x^n + \sum_{i=0}^{p+1} \tau_i Q_{n+i}(x), expressed in terms of a special polynomial basis {Q_i(x)}, called the canonical polynomials associated with D, which are defined below. With this method, the p + 1 coefficients {C_i; i = 0, 1, ..., p} are computed first, by solving a (p + 1) × (p + 1) system of nonlinear algebraic equations. Then the unknown coefficients τ_i are calculated by direct substitution. Unlike the functional Bethe ansatz method, if the desired order n of y(x) is increased, the dimension of the nonlinear algebraic system remains unchanged.

Definition. For any integer k ≥ 0, Q*_k(x) is called a k-th canonical function of D if Q*_k(x) is an exact solution of the differential equation D Q*_k = x^k.

From [4], {Q*_k(x)} can be generated by the following recursion:

Q^*_{k+p} = \frac{1}{f_p^{(k)}}\left\{x^k - \sum_{i=0}^{p+1} f_{i-2}^{(k)}\,Q^*_{k+i-2}\right\}, \quad k \ge 0,   (2)

where f_{i-2}^{(k)} := k(k-1)A_i + kB_{i-1} + C_{i-2} for any i ≥ 0, with the convention that A_i = B_i = C_i = 0 when i < 0.

Working out (2), it can be shown that for all k ≥ 0, Q*_{k+p} can be written as

Q^*_{k+p} = Q_{k+p} + R_{k+p},

where {Q_k; k ≥ 0} are called canonical polynomials, given by the recursion

Q_{k+p} = \frac{1}{f_p^{(k)}}\left\{x^k - \sum_{i=0}^{p+1} f_{i-2}^{(k)}\,Q_{k+i-2}\right\}, \quad k \ge 0,   (3)

with Q_j = 0 for all j = 0, 1, ..., p - 1, and

R_{k+p} = \sum_{j=0}^{p-1} r_{k+p}^{(j)}\,Q_j^*,

where {r_k^{(j)}; k ≥ 0} is a sequence of constants given by the recursion

r_{k+p}^{(j)} = \frac{1}{f_p^{(k)}} \sum_{i=0}^{p+1} f_{i-2}^{(k)}\,r_{k+i-2}^{(j)}, \quad k \ge 0,   (4)

for each j = 0, 1, 2, ..., p - 1, with r_i^{(j)} = \delta_{ij} for all i and j = 0, 1, ..., p - 1.
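
A direct transcription of the constants f_{i-2}^{(k)} and of recursion (4) is sketched below (our own illustration; the toy operator and the trial C(x) are hypothetical, chosen so that f_p^{(k)} is nonzero for every k):

    def f(k, i, A, B, C):
        # f_{i-2}^{(k)} = k(k-1)*A_i + k*B_{i-1} + C_{i-2}, out-of-range
        # coefficients being zero, as per the stated convention
        Ai = A[i] if 0 <= i < len(A) else 0.0
        Bi = B[i-1] if 0 <= i-1 < len(B) else 0.0
        Ci = C[i-2] if 0 <= i-2 < len(C) else 0.0
        return k*(k-1)*Ai + k*Bi + Ci

    def r_constants(j, p, A, B, C, kmax):
        # seed r_i^{(j)} = delta_ij for i = 0..p-1, then apply recursion (4)
        r = [1.0 if i == j else 0.0 for i in range(p)]
        for k in range(kmax):
            s = sum(f(k, i, A, B, C) * r[k+i-2]
                    for i in range(p + 2) if k + i - 2 >= 0)
            r.append(s / f(k, p + 2, A, B, C))  # division by f_p^{(k)}
        return r

    # hypothetical toy data with p = 1: A(x) = x^2, B(x) = x, trial C(x) = 1 + x
    A, B, C, p = [0.0, 0.0, 1.0, 0.0], [0.0, 1.0, 0.0], [1.0, 1.0], 1
    print(r_constants(0, p, A, B, C, kmax=5))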

We state the main result:

Theorem. Let A(x) = \sum_{i=0}^{p+2} A_i x^i and B(x) = \sum_{i=0}^{p+1} B_i x^i be given polynomials of degrees p + 2 and p + 1, respectively, and suppose that C(x) = \sum_{i=0}^{p} C_i x^i is an unknown polynomial of degree p ≥ 0. Let n ≥ 0. If {C_0, C_1, ..., C_p} satisfy the following system of algebraic equations

C_p = -n(n-1)A_{p+2} - nB_{p+1},   (5)

\sum_{i=0}^{p+1} \left(n(n-1)A_i + nB_{i-1} + C_{i-2}\right) r_{n+i-2}^{(l)} = 0, \quad l = 0, 1, 2, \dots, p-1,   (6)

where {r_i^{(l)}} are given by (4), then

y(x) = x^n + \sum_{i=0}^{p+1} \tau_i^{(n)}\,Q_{n+i-2}, \quad \text{with} \quad \tau_i^{(n)} = -n(n-1)A_i - nB_{i-1} - C_{i-2},   (7)

is an exact polynomial solution of the differential equation (1), where {Q_k} are given by (3).

References

[1] R. Sasaki, W .L. Yang and Y. Z. Zhang. Bethe ansatz solutions to quasi-exactly solvable difference equations, SIGMA 5, 104 (16 pages), 2009.

[2] A. Moroz and A. E. Miroshnichenko. Constraint polynomial approach – an alternative to the functional Bethe Ansatz method? arXiv:1807.11871v1, July 2018

[3] N. Hatami and M. R. Setare. Exact solutions for a class of quasi-exactly solvable models: A unified treatment, Eur. Phys. J. Plus 132, 311 (2017). https://doi.org/10.1140/epjp/i2017-11569-6

[4] E. L. Ortiz. The Tau Method. SIAM J. Numer. Anal. 6(3):480-492, 1969.

Finite Element Method and Programs for Investigation of Quantum Few-Body Systems

S.I. Vinitsky1,2, G. Chuluunbaatar1,2, O. Chuluunbaatar1,3, A.A. Gusev1,4, R. G. Nazmitdinov1,4, P. W. Wen1,5, L.L. Hai6, V.L. Derbov7, P.M. Krassovitskiy8, and A. Góźdź9

1 Joint Institute for Nuclear Research, Dubna, Russia 2 Peoples' Friendship University of Russia Moscow, Russia

3 Institute of Mathematics National University of Mongolia, Ulaanbaatar, Mongolia 4 Dubna State University, Dubna, Russia

5 China Institute of Atomic Energy, Beijing, China 6 Ho Chi Minh city University of Education, Ho Chi Minh city, Vietnam

7 Saratov State University, Saratov, Russia 8 Institute of Nuclear Physics, Almaty, Kazakhstan

9 Institute of Physics University of M. Curie-Skłodowska, Lublin, Poland

An algorithmic approach is developed for constructing high-accuracy finite element method schemes and for the Kantorovich method (reduction to a system of ordinary differential equations), for solving multidimensional boundary-value problems for the Schrödinger equation and investigating quantum systems of several particles [1-3]. The operability of the constructed computing schemes, of the numerical and symbolic (computer-algebraic) algorithms, and of the problem-oriented program complexes implementing them is confirmed by numerical analysis of exactly solvable and reference tasks with known solutions, as well as of physically interesting configurations and resonant processes possible in quantum systems of several particles: photo-absorption in ensembles of axially symmetric quantum dots [4], Coulomb scattering of an electron in a homogeneous magnetic field and photo-ionization of the hydrogen atom [5], scattering of a diatomic molecule on an atom or a potential barrier [6], and tunneling of a cluster of several identical quantum particles through potential barriers or wells [7,8].

In our current work, the fusion of two nuclei that occurs at strong coupling of their relative motion to surface vibrations is analyzed. To this aim, a new efficient finite element method, which improves the KANTBP code [9-11], is used to solve numerically the coupled-channels equations, with orthogonalisation of the closed coupled channels in the asymptotic region [12]. With the aid of this method, the important role of boundary conditions corresponding to total absorption (e.g., [13,14]) is shown. A comparison of the presented results with available experimental data demonstrates the advantage of the modified KANTBP code with respect to the widely used numerical method known in the literature as CCFULL [15]. The deep sub-barrier fusion cross sections of some reaction systems have been successfully described. It is confirmed that multiphonon excitations play an important role in the description of the spectroscopic factor [16-18].

The work was partially supported by the RFBR (grant No. 18-51-18005), the Bogoliubov-Infeld and Hulubei-Meshcheryakov programs, the RUDN University Program 5-100, grant of Plenipotentiary of the Republic of Kazakhstan in JINR, and Ho Chi Minh city University of Education (grant CS.2018.19.50).

References

[1] A.A. Gusev et al. // Lect. Notes Computer Sci. 2017. V. 10490. P. 134.

[2] A.A. Gusev et al. // Lect. Notes Computer Sci. 2017. V. 10490. P. 151.

[3] A.A. Gusev et al. // Lect. Notes Computer Sci. 2018. V. 11077. P. 197.

[4] A.A. Gusev et al. // Proc. SPIE 2018. V. 10717. P. 1071712.

[5] A.A. Gusev // Bull. PFUR. Ser. Math. Inf. Sci. Phys. 2014. No. 2. P. 93.

[6] A.A. Gusev et al. // Phys. Atom. Nucl. 2018. V. 81. P. 911.

[7] A.A. Gusev et al. // Acta Physica Polonica B Proc. Suppl. 2017 V. 10, P. 269.

[8] A.A. Gusev et al. // Phys. Atom. Nucl. 2014. V. 77. P. 389.

[9] O. Chuluunbaatar et al. // Comput. Phys. Commun. 2007. V. 177. P. 649.

[10] O. Chuluunbaatar et al. // Comput. Phys. Commun. 2008. V. 189. P. 685.

[11] O. Chuluunbaatar et al. // Comput. Phys. Commun. 2014. V. 185. P. 3341.

[12] A.A. Gusev et al. // Bull. PFUR. Ser. Math. Inf. Sci. Phys. 2016. No. 3. P. 38.

[13] V.I. Zagrebaev et al. // Phys. Atom. Nucl. 2004. V. 67. P. 1462.

[14] V.V. Samarin et al. // Nucl. Phys. A. 2004. V. 734. P. E9.

[15] K. Hagino et al. // Comput. Phys. Commun. 1999. V. 123. P. 143.

[16] B.B. Back et al. // Rev. Mod. Phys. 2014. V. 86. P. 317.

[17] K. Hagino et al. // Phys. Rev. C 2018. V. 97. P. 034623.

[18] P.W. Wen et al., in book of abstracts of LXIX International Conference ``NUCLEUS-2019'', Dubna, Russia, 1--5 July, 2019, p. 294.

Algorithms for Generating in Analytical Form Interpolation Hermite Polynomials in Hypercube

A.A. Gusev1, G. Chuluunbaatar1,2, O. Chuluunbaatar1,3, S.I. Vinitsky1,2, L.L. Hai4, T.T. Lua4, V.L. Derbov5, P.M. Krassovitskiy6, A. Góźdź7 1 Joint Institute for Nuclear Research, Dubna, Russia

2 Peoples' Friendship University of Russia Moscow, Russia 3 Institute of Mathematics National University of Mongolia, Ulaanbaatar, Mongolia

4 Ho Chi Minh city University of Education, Ho Chi Minh city, Vietnam 5 Saratov State University, Saratov, Russia

6 Institute of Nuclear Physics, Almaty, Kazakhstan 7 Institute of Physics University of M. Curie-Skłodowska, Lublin, Poland

We present a new symbolic-numeric algorithm, implemented in Maple, for constructing Hermitian finite elements in a standard d-dimensional cube, generalizing the C¹ local tricubic interpolation scheme in three dimensions proposed in Ref. [1]. Our construction avoids explicit inverse matrices in the solution of the set of algebraic equations for the unknown coefficients of the polynomials of d variables. The algorithm yields explicit analytical expressions for the interpolation Hermite polynomials (IHPs). The basis functions of the finite elements are high-order polynomials, determined from a specially constructed set of values of the polynomials themselves and of their partial derivatives up to order K_max - 1 at the vertices. Such a choice of values allows us to construct a piecewise polynomial basis that is C^{K_max-1} continuous on the boundaries of the finite elements, together with the derivatives up to the required order. In the case of a d-dimensional cube, it is shown that the basis functions are products of d one-dimensional IHPs of order p', one per variable, given in analytical form with continuous partial derivatives up to order K_max - 1 on the boundaries of the finite elements [2]. Using this fact, we propose a new symbolic algorithm, implemented in Maple, for calculating in analytical form the basis functions, i.e. the IHPs of d variables with continuous partial derivatives up to order K_max - 1 on the faces of the standard d-dimensional hypercube.

Algorithm. The IHPs of d variables in the d-dimensional cube with unit side, which preserve the continuity of the piecewise polynomial and of its derivatives with respect to each independent variable up to order K_max - 1 at the 2^d vertices,

\frac{\partial^{K'_1+\cdots+K'_d}\,\varphi^{K_1\ldots K_d}_{r_1\ldots r_d}(x_1,\ldots,x_d)}{\partial x_1^{K'_1}\cdots\partial x_d^{K'_d}}\Bigg|_{(x_1,\ldots,x_d)=(x'_1,\ldots,x'_d)} = \delta_{x_1 x'_1}\cdots\delta_{x_d x'_d}\,\delta_{K_1 K'_1}\cdots\delta_{K_d K'_d},   (1)

are calculated in the analytical form

\varphi^{K_1\ldots K_d}_{r_1\ldots r_d}(x_1,\ldots,x_d) = \prod_{s=1}^{d} \varphi^{K_s}_{r_s}(x_s),   (2)

as a product of one-dimensional IHPs \varphi^{K_s}_{r_s}(x_s) of order p' = K_max(p + 1) - 1, where r_s = 0, ..., p is the number of the node belonging to the unit interval, K_s, K'_s = 0, ..., K_max - 1, and s = 1, ..., d.

The IHPs are calculated in analytical form using the symbolic algorithm [2] implemented in Maple.

For K_max = 2, p' = 2p + 1, the one-dimensional IHPs read:

\varphi_r^{K_s=0}(x) = \left(1 - (x - x_r) \sum_{r'=0,\,r'\ne r}^{p} \frac{2}{x_r - x_{r'}}\right) \prod_{r'=0,\,r'\ne r}^{p} \left(\frac{x - x_{r'}}{x_r - x_{r'}}\right)^2,   (3)

\varphi_r^{K_s=1}(x) = (x - x_r) \prod_{r'=0,\,r'\ne r}^{p} \left(\frac{x - x_{r'}}{x_r - x_{r'}}\right)^2,   (4)

where p is the number of divisions of the interval and x_r are the nodes of the IHP.
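
A direct transcription of (3) and (4) for equidistant nodes on [0, 1] is sketched below (ours, not the authors' Maple code), together with a quick check of the value-matching property at the nodes:

    import numpy as np

    def phi_factory(p):
        x = np.linspace(0.0, 1.0, p + 1)          # equidistant nodes x_0..x_p

        def phi0(r, t):   # (3): value 1 at node r, zero value and slope elsewhere
            others = [xr for i, xr in enumerate(x) if i != r]
            L2 = np.prod([((t - xr)/(x[r] - xr))**2 for xr in others], axis=0)
            return (1.0 - (t - x[r]) * sum(2.0/(x[r] - xr) for xr in others)) * L2

        def phi1(r, t):   # (4): unit first derivative at node r
            others = [xr for i, xr in enumerate(x) if i != r]
            L2 = np.prod([((t - xr)/(x[r] - xr))**2 for xr in others], axis=0)
            return (t - x[r]) * L2

        return x, phi0, phi1

    x, phi0, phi1 = phi_factory(p=2)
    print(phi0(1, x))   # ~[0, 1, 0]: Kronecker delta property at the nodes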

Figure: The discrepancy δE_i = E_i^h - E_i between the calculated eigenvalues E_i^h of the Helmholtz problem for a three-dimensional cube with side π and their exact values E_i = 0[1], 1[3], 2[3], 3[1], 4[3], 5[6], 6[3], 8[3], 9[6], 10[6], 11[3], where the multiplicity of degeneracy is indicated in square brackets. Calculations were performed using the FEM with third-order IHPs on cubic elements (M(1000)) and with interpolation Lagrange polynomials of 3rd order (T3(1000)), 4th order (T4(729)) and 5th order (T5(1331)) on tetrahedral elements as determined in [3]. The length of the eigenvector is given in parentheses.

The method can be used to solve elliptic boundary-value problems by means of the high-accuracy finite element method. Its advantages are the reduced computational cost and the availability of accurate derivatives of the interpolated function. The present study is motivated by the possible application of the finite element method on simplexes [3] for solving a boundary-value problem in the collective nuclear model with tetrahedral symmetry [4], as well as by other applications in different fields, e.g., flow dynamics in unsteady fluid systems [5].

The work was partially supported by the RFBR (grant No. 18-51-18005), the Bogoliubov- Infeld program, the RUDN University Program 5-100, grant of Plenipotentiary of the Republic of Kazakhstan in JINR, and Ho Chi Minh city University of Education (grant CS.2018.19.50).

References

[1] Lekien F. and Marsden J. Tricubic interpolation in three dimensions // Int. J. Numer. Meth. Engng. 2005. Vol. 63, P. 455-471. [https://github.com/nbigaouette/libtricubic]

[2] Gusev A. A., Chuluunbaatar O., Vinitsky S. I., et al. Symbolic-Numerical Solution of Boundary-Value Problems with Self-Adjoint Second-Order Differential Equation Using the Finite Element Method with Interpolation Hermite Polynomials // Lect. Notes Comp. Sci. 2014. Vol. 8660, P. 138-154.

[3] Gusev A. A., Gerdt V. P., Chuluunbaatar O., et al. Symbolic-numerical algorithm for generating interpolation multivariate Hermite polynomials of high-accuracy finite element method // Lect. Notes Comp. Sci. 2017. Vol. 10490, P. 134-150.

[4] Gusev A. A., Vinitsky S. I., Chuluunbaatar O., et al. Finite Element Method for Solving the Collective Nuclear Model with Tetrahedral Symmetry, arXiv:1812.02665v1 [nucl-th] 2018.

[5] Lekien F., Shaddena S. C., Marsden J. E., Lagrangian coherent structures in n-dimensional systems// J. Math. Phys. 2007. Vol. 48, P. 065404.

PySlise: a Python Package for solving Schrödinger Equations

Toon Baeyens1, Marnix Van Daele1 1 Dept. of Applied Mathematics, Comp. Science and Statistics,

Ghent University, Krijgslaan 281 Building S9 B9000 Gent, Gent, Belgium

This is an introduction to a new Python package that is able to solve numerically the one- and two-dimensional time-independent Schrödinger equation. Accompanying this package there is a web-based GUI.

The main motivation of this research is modernizing and bringing together existing techniques and proven methods. Matslise is a very effective implementation of CP-methods for the one-dimensional Sturm-Liouville equation but, due to its numerous features, this implementation is not highly optimised for efficiency. For this reason we have reimplemented and optimised the algorithms of Matslise and Matscs in C++. This reimplementation became the computation engine for the Python package and the web-based GUI (WebAssembly).

The Python package is less feature-rich than the original Matslise and Matscs (only the Schrödinger equation, no degenerate-case detection…), but much more optimised. These optimisations include: very efficient eigenfunction calculations, smarter backward propagation, a higher-order method for Matscs, on-request error calculation, and the use of C++ with Eigen. On top of that, there is a unified interface to communicate with Matslise, Matscs and the new code for the two-dimensional case.

Modelling high precision spectroscopy experiments

Dimitar Bakalov1 1 Laboratory of mathematical modelling, Institute for Nuclear Research and Nuclear Energy,

Bulgarian Academy of Sciences, Tsarigradsko ch. 72, Sofia 1784, Bulgaria

Quite a few current experimental projects aim at determining improved-accuracy values of fundamental characteristics of elementary particles and of basic physical constants by means of high-precision spectroscopy of atomic and molecular systems. These values are obtained by juxtaposing the experimental data with the results of advanced theoretical calculations of the spectra of the atomic systems. While the main computational challenge is indisputably the accurate evaluation of the bound-state energy spectrum of the involved systems, the estimation of the systematic effects that give rise to substantial experimental uncertainties and the optimization of the experiments also pose specific problems that require non-trivial mathematical and computational approaches.

A team from the Laboratory of mathematical modelling of INRNE-BAS has been working on modelling three such advanced experimental projects. The most interesting problems encountered are briefly described below, and the obtained results are outlined.

1. Precision spectroscopy of antiprotonic helium has provided some of the most accurate data on the mass and magnetic moment of the antiproton; at present the efforts are targeted at determining the mass of negative pions by pionic helium spectroscopy [1]. The main systematic uncertainty of the experimental data is the pressure (density) shift and broadening of the spectral lines due to the interaction of the exotic atoms with the surrounding helium atoms. Using advanced quantum chemistry methods, we have evaluated the pairwise interaction energy for a wide range of the geometric and kinematic parameters and, by applying an appropriately generalized method of integration over the trajectories of the colliding species, calculated the density shifts for antiprotonic and pionic helium, thus reducing the experimental uncertainty by more than an order of magnitude.

2. The goal of the FAMU experiment, led by INFN (Italy) and performed at the RIKEN-RAL muon facility (UK), is to measure the hyperfine splitting in the ground state of the muonic hydrogen atom and to extract from it the electromagnetic radius of the proton [2]. The interest in the subject is related to the recently discovered incompatibility between the proton charge radii obtained in ordinary and in muonic experiments. The measurement is difficult because the hyperfine M1 transition is suppressed by orders of magnitude and is extremely weak. To achieve the needed enhancement we have developed a detailed multi-parametric model of the interactions of low-energy muons with matter and laser radiation, performed the needed optimization, and demonstrated that, at the optimum, the required efficiency of the experimental method is attained.


3. The goal of the high-precision spectroscopy of hydrogen isotope molecular ions at the University of Düsseldorf is to provide top-accuracy data on the masses of the proton and the deuteron and on the fine structure constant [3]. By analyzing the susceptibility of the spectrum to external fields, a number of transitions have been shown to be particularly appropriate to serve as an improved time standard and possibly be used in molecular clocks. The class of weak "forbidden transitions" in homonuclear molecular ions (i.e. transitions that are forbidden by non-relativistic selection rules) has been thoroughly investigated and demonstrated to be of spectroscopic interest.

The few examples considered above illustrate the significant potential of high precision spectroscopy of simple atomic and molecular systems as a tool for the determination of the fundamental characteristics of elementary particles.

References:

[1] B. Obreshkov, D. Bakalov, Phys. Rev. A 93, 062505 (2016).

[2] E. Mocchiutti, A. Adamczak, D. Bakalov et al., arXiv:1905.02049, to appear in Phys. Rev. A.

[3] V.I. Korobov, P. Danev, D. Bakalov, S. Schiller, Phys. Rev. A 97, 032505 (2018); S. Schiller, D. Bakalov, V.I. Korobov, Phys. Rev. Lett. 113, 023004 (2014).


Gibbs Phenomenon in Clenshaw-Curtis Quadrature

S. Adam1,2 and Gh. Adam1,2 1 Joint Institute for Nuclear Research (JINR), Laboratory of Information Technologies (LIT),

6 Joliot Curie, 141980 Dubna, Moscow region, Russian Federation 2 Horia Hulubei National Institute for Physics and Nuclear Engineering (IFIN-HH),

30 Reactorului, 077125 Măgurele - Bucharest, Romania

The Gibbs phenomenon has a peculiar history. It is named after the American theoretical physicist J.W. Gibbs, who described it in two notes published in 1898 and 1899 [1]. In an attempt to understand the persistent oscillations produced by Albert Michelson's harmonic analyzer near discontinuities of the functions it Fourier analyzed, Gibbs arrived at the conclusion that this is a purely mathematical property. The strange part of the story is that the "Gibbs phenomenon" was observed and essentially explained half a century earlier, in a theoretical investigation published by the English mathematician H. Wilbraham [2], who corrected a remark by Fourier on the convergence of the Fourier series.

Starting in 1906, when the general conditions for the existence of the Gibbs phenomenon were established [3], the phenomenon has been associated with the overshoot effect near discontinuities in the Fourier series expansions of discontinuous functions. An insightful elementary derivation of this effect was reported by W.J. Thompson [4].

In fact, the Gibbs phenomenon is not restricted to the Fourier series. Recent investigations have evidenced its occurrence in spherical harmonics expansions, Fourier-Bessel series, radial basis functions, and many other orthogonal expansions (see, e.g., [5]).

In the present report we investigate the occurrence of the Gibbs phenomenon in the truncated Chebyshev series entering the Clenshaw-Curtis (CC) quadrature as a building block of the automatic adaptive quadrature.

Following the lines of a recent report [6], our main concern is how to benefit from the advantages of CC quadrature, evidenced by N. Trefethen [7] for intricate continuous functions, in order to obtain accurate solutions of Riemann integrals of discontinuous functions.
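
As a small numerical illustration (ours, not the report's), the persistence of the overshoot in truncated Chebyshev expansions of a step function can be observed directly in Python:

import numpy as np
from numpy.polynomial import chebyshev as C

# Interpolate sign(x) at Chebyshev nodes and look for the Gibbs overshoot.
xs = np.linspace(-1.0, 1.0, 20001)
for deg in (16, 64, 256):
    coef = C.chebinterpolate(np.sign, deg)    # truncated Chebyshev expansion
    print(deg, C.chebval(xs, coef).max())     # stays above 1 as deg grows

The maximum does not decay towards 1 as the degree increases, which is the signature of the Gibbs phenomenon in this setting.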

References

[1] J.W. Gibbs, "Fourier's Series", Nature 59, 200 (1898), doi:10.1038/059200b0; Nature 59, 606 (1899), doi:10.1038/059606a0.

[2] H. Wilbraham, "On a certain periodic function", The Cambridge and Dublin Mathematical Journal 3, 198–201 (1848).


[3] M. Bôcher, "Introduction to the theory of Fourier's series", Annals of Mathematics, 7 (3), 81–152 (April 1906). The Gibbs phenomenon is discussed on pages 123–132; Gibbs's role is mentioned on page 129.

[4] W.J. Thompson, "Fourier series and the Gibbs phenomenon", American J. Phys., 60 (5), 425–429 (May 1992).

[5] B. Adcock, "Gibbs phenomenon and its removal for a class of orthogonal expansions", BIT, 51 (1), 7–41 (March 2011), DOI: 10.1007/s10543-010-0301-5 .

[6] Gh. Adam and S. Adam, "Local versus global decisions in Bayesian automatic adaptive quadrature", Lecture at MMCP 2019, Stara Lesna, July 1–5, 2019, Slovakia.

[7] L.N. Trefethen, "Is Gauss Quadrature Better than Clenshaw–Curtis?", SIAM Review, 50(1), 67-87, (2008), https://doi.org/10.1137/060659831

QUANTUM CORRELATIONS RELEVANT FOR QUANTUM COMPUTING

Evolution of quantum correlations in Gaussian bosonic channels

Aurelian Isar

Department of Theoretical Physics, National Institute of Physics and Nuclear Engineering Bucharest-Magurele, Romania

The quantum phenomena appearing when distant systems become correlated are interesting both fundamentally and operationally. There exist situations where such quantum correlations enable tasks that are impossible within the classical formalism. Quantum correlations are a powerful resource that underpins applications from quantum metrology to quantum computing.

Quantum information theory is founded upon the expectation that quantum resources like coherence, entanglement, discord and steering can be exploited for novel or enhanced ways of transmitting and processing information, such as quantum cryptography, teleportation, and quantum computing.

We describe the behaviour of continuous variable quantum correlations (entanglement, Gaussian quantum discord, Gaussian quantum steering) and of quantum coherence in a system of two (coupled or uncoupled) bosonic modes evolving in a Gaussian noisy channel, in the case of a common environment in the form of a thermal bath or a squeezed thermal bath.

We solve the Markovian master equation for the time evolution of the considered system and study the quantum correlations and quantum coherence in terms of covariance matrices for Gaussian input states (squeezed vacuum state and squeezed thermal state).
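
For orientation (a standard textbook form, not quoted from the abstract), a Markovian master equation of Gorini-Kossakowski-Sudarshan-Lindblad type reads

\frac{d\rho}{dt} = -\frac{i}{\hbar}[H,\rho] + \sum_j \Big( L_j \rho L_j^{\dagger} - \frac{1}{2}\{ L_j^{\dagger} L_j, \rho \} \Big),

where the operators L_j model the coupling to the environment. For Gaussian channels and Gaussian input states this dynamics closes on the first and second moments, which is why the analysis can be carried out entirely at the level of covariance matrices.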

Depending on the initial state of the system, on the coefficients describing the interaction of the system with the reservoir, and on the strength of the coupling between the two modes, we observe phenomena such as generation, suppression, periodic revivals and suppressions, or asymptotic decay in time of the quantum correlations and of the relative entropy of coherence [1-4].

References

[1] A. Isar, Open Sys. Information Dyn. 23 (2016) 1650007

[2] A. Isar, T. Mihaescu, Eur. Phys. J. D 71 (2017) 144

[3] T. Mihaescu, A. Isar, Eur. Phys. J. D 72 (2018) 104

[4] A. Croitoru, A. Isar, in preparation.


The quantum dynamics of a two-qubit pair longitudinally coupled with a single-mode boson field

Elena Cecoi1*, Viorel Ciornea1, Aurelian Isar2, Mihai A. Macovei1 1 Institute of Applied Physics, Chișinău, Moldova

2 Horia Hulubei National Institute of Physics and Nuclear Engineering, Bucharest - Magurele, Romania E-mail: *[email protected]

The quantum dynamics of few-atom systems has attracted enormous attention over the last several decades, because small qubit samples may form building blocks for larger networks with huge potential applications in quantum technologies. Understanding the few-qubit quantum dynamics as a whole may therefore help towards those goals. The creation of entanglement in a two-atom system has already been widely discussed [1]. In this context, artificial atomic systems, i.e. quantum dots and quantum wells, as well as superconducting qubits, have also been widely investigated.

We report here the quantum dynamics of a laser-pumped pair of two-level qubits longitudinally coupled with a phonon thermostat or, respectively, with a single-mode boson field. We have demonstrated an efficient way to generate nonclassical two-qubit states, i.e. entangled subradiant states quantified via the concurrence [2], with the help of the phonon thermostat [3]. When the qubits are coupled with a single quantized leaking boson mode, we have demonstrated an efficient cooling mechanism well below the limits imposed by the thermal background, even though the two-level qubits are resonantly driven. The relationship between the qubit-pair entanglement and the boson-mode cooling effects is correspondingly established [4].
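
For reference, the concurrence of [2] used above is defined (in its standard form) as

C(\rho) = \max\{0,\ \lambda_1 - \lambda_2 - \lambda_3 - \lambda_4\},

where the \lambda_i are, in decreasing order, the square roots of the eigenvalues of \rho\tilde{\rho}, with \tilde{\rho} = (\sigma_y \otimes \sigma_y)\,\rho^{*}\,(\sigma_y \otimes \sigma_y); C = 0 for separable and C = 1 for maximally entangled two-qubit states.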

References

[1] Z. Ficek and S. Swain, Quantum Interference and Coherence: Theory and Experiments (Springer, Berlin, 2005).

[2] S. Hill, and W. K. Wootters, Entanglement of a pair of quantum bits, Phys. Rev. Lett. 78, 5022 (1997); W. K. Wootters, Entanglement of formation of an arbitrary state of two qubits, Phys. Rev. Lett. 80, 2245 (1998).

[3] E. Cecoi, V. Ciornea, A. Isar, M. A. Macovei, Entanglement of a laser-driven pair of two-level qubits via its phonon environment, JOSA B 35, 1127 (2018).

[4] E. Cecoi, V. Ciornea, A. Isar, M. A. Macovei, submitted.


Uhlmann Fidelity of Two Bosonic Modes in a Thermal Bath

Marina Cuzminschi1,2, Alexei Zubarev2,3, Aurelian Isar1,2 1 “Horia Hulubei” National Institute for Physics and Nuclear Engineering,

30 Reactorului, Bucharest-Măgurele, Romania 2 University of Bucharest, Faculty of Physics

3 National Institute for Laser, Plasma and Radiation Physics, Magurele, Romania

Distinguishing between a pair of quantum states is an important task in quantum computing. One of the well-known ways to study the similarity of two quantum states is the Uhlmann fidelity [1]. From this measure one can calculate the Bures distance [2], which, in turn, can be used to quantify entanglement, nonclassicality, and polarization.

The calculation of the quantum fidelity for systems placed in a noisy environment is essential for the implementation of quantum cryptography [3] and quantum teleportation [4] using Gaussian quantum channels. In particular, it is important for the implementation of copying and cloning of quantum states.

For pure states, the quantum fidelity equals the transition probability between the two states [1]. For the estimation of the quantum fidelity of mixed states one can use the covariance matrix method [5].
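
For reference, the Uhlmann fidelity of two states \rho_1 and \rho_2 is

F(\rho_1, \rho_2) = \Big[ \mathrm{Tr}\, \sqrt{ \sqrt{\rho_1}\, \rho_2\, \sqrt{\rho_1} } \Big]^2,

which for pure states reduces to the transition probability |\langle \psi_1 | \psi_2 \rangle|^2.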

The time evolution of fidelity was studied for Gaussian systems interacting with a thermal bath [6, 7, 8] and for Gaussian systems interacting with two distinct thermal reservoirs [9].

The main goal of this study [10] is to investigate the dynamics of a system consisting of two non-coupled bosonic modes interacting with a common thermal bath, in the framework of the theory of open systems based on completely positive quantum dynamical semigroups. We determine the quantum fidelity using a squeezed vacuum state and a squeezed thermal state as input states, and we study its dependence on time, on the temperature of the thermal bath, on the squeezing parameter and on the average number of thermal photons.

We show that the fidelity decreases with increasing temperature, as expected, and that it also decreases with increasing squeezing parameter. Interestingly, for very low temperatures the fidelity increases with temperature for a system in an initial squeezed thermal state. We also obtain the evolution of the fidelity as a function of the frequency of one mode, while the frequency of the other mode is kept fixed. The asymptotic value of the fidelity is finite over the whole range of temperatures and squeezing parameters.

References

[1] S. Olivares, Eur. Phys. J. Special Topics 203, 3 (2012).

[2] P. Marian, T. A. Marian, H. Scutaru, Phys. Rev. A 68, 062309 (2003).

[3] N. Gisin, G. Ribordy, W. Tittel, H. Zbinden, Rev. Mod. Phys. 74, 145 (2002).


[4] C. M. Caves, P. D. Drummond, Rev. Mod. Phys. 66, 481 (1994).

[5] P. Marian, T. A. Marian, Phys. Rev. A 86, 022340 (2012).

[6] A. Isar, Rom. Rep. Phys. 59, 473 (2007).

[7] A. Isar, Eur. Phys. J. Special Topics 160, 225 (2008).

[8] A. Isar, Phys. Part. Nuc. Lett. 6, 567 (2009).

[9] H. A. Boura, A. Isar, Y. A. Kourbolagh, Rom. Rep. Phys. 68, 19 (2016).

[10] M. Cuzminschi, A. Zubarev, A. Isar, accepted for publication in Proc. Rom. Acad. A 20 (2019).


Lasing and cooling effects of a quantum oscillator coupled with a three-level Λ-type system

Alexandra Mirzac1, Mihai A. Macovei1 1 Institute of Applied Physics, Academiei 5 str., MD-2028, Chișinău, MOLDOVA (Rep.),

e-mail: [email protected]

Quantum computing constantly receives huge public attention, due to the promising possibility of performing complicated ultra-fast computing tasks. Quantum processors are embedded in various physical systems, such as quantum dots, trapped ions and superconducting circuits, which have proved to possess fragile quantum states and thus require isolation from the external environment. A solution to these challenges comes from photonic systems, which feature advantages for quantum information. First of all, the photon itself is an information carrier, providing high-speed computing and the manipulation of quantum states with linear optical elements, without any electrical circuits. Here we focus on finding coherent sources of THz photons which may be used in quantum computational algorithms and circuits. Terahertz light is a key tool for the acceleration of information supercurrents and a path towards the design of emergent material properties and of the coherent oscillations needed for engineering quantum computers [1].

Fig. 1. The model's scheme: a laser-pumped three-level Λ-type system whose upper state is coupled with a quantum oscillator mode; the corresponding laser-qubit coupling strengths (Rabi frequencies) and the respective spontaneous decay rates are indicated.

In this context, we shall report our recent investigations of lasing and cooling effects in a quantum oscillator coupled with the uppermost state of a three-level Λ-type system. The three-level emitter, exhibiting two transitions with orthogonal dipole moments, is coherently pumped by one or two electromagnetic field sources, see Fig. 1. We have computed the parameter ranges for flexible lasing and cooling phenomena based on the quantum oscillator's degree of freedom. The asymmetrical decay rates and the quantum interference lead to population transfer within the relevant dressed states of the emitter's subsystem coupled with the quantum oscillator. A nanomechanical resonator coupled with the most excited state of a three-level emitter fixed on it is considered as an appropriate system. On the other hand, if the upper state of a Λ-type system possesses a permanent dipole, it couples with a cavity electromagnetic field mode in the terahertz frequency range. In the latter case, an effective electromagnetic source of photons in the terahertz frequency range is obtained [2].

In correspondence with the dressed-state picture of the three-level system, we have identified and modelled numerically two resonance conditions characterizing the quantum oscillator's dynamics, namely when the quantum oscillator's frequency is close to the doubled generalized Rabi frequency or to the generalized Rabi frequency, respectively. These two situations were computed separately. In both of them we have found steady-state lasing or cooling regimes for the quantum oscillator's field mode, provided the spontaneous decay rates corresponding to the two transitions of the three-level qubit are asymmetrical. We also performed a careful analysis of the mechanisms responsible for these effects, which are completely different in the two situations [2].

References:

[1] X. Yang, C. Vaswani, C. Sundahl, M. Mootz, L. Luo, J. H. Kang, I. E. Perakis, C. B. Eom, J. Wang. Lightwave-driven gapless superconductivity and forbidden quantum beats by terahertz symmetry breaking. Nature Photonics, 2019. DOI: 10.1038/s41566-019-0470-y

[2] A. Mirzac, M. A. Macovei, Dynamics of a quantum oscillator coupled with a three-level Lambda-type emitter, arXiv:1810.09264v2.


Fidelity of teleportation for two mode Gaussian resource states in a thermal bath

Alexei Zubarev1,3, Marina Cuzminschi2,3, Aurelian Isar2,3 1 National Institute for Laser, Plasma and Radiation Physics, Magurele, Romania

2 “Horia Hulubei” National Institute for Physics and Nuclear Engineering, 30 Reactorului, Bucharest-Măgurele, Romania

3 University of Bucharest, Faculty of Physics

One of the most important applications of quantum mechanics and quantum information is quantum teleportation. Quantum teleportation [1] is the transfer of the quantum state of one quantum system to another one, across a distance, without any physical contact. For teleportation to occur, both parties must share an entangled state. The most common resource for quantum teleportation is coherent light, which allows the implementation of teleportation protocols over distances of up to 100 km. To evaluate the success rate of teleportation events one uses the fidelity of teleportation.

In the case of classical systems, the maximal value of the fidelity of teleportation is 1/2 [2]. There is no such restriction in the case of quantum systems. This allows secure quantum teleportation, which can be implemented if the fidelity of teleportation reaches the value 2/3 [3].

We mention that, for a successful implementation of a teleportation protocol, the resource states must be entangled when the teleportation starts. Experimentally, a fidelity of teleportation of up to 0.83 has been achieved [4].

To estimate the fidelity of teleportation we use the covariance matrix method [5, 6], based on the semigroup dynamics of open quantum systems.

The goal of our study [7] is the search for an optimal value of the fidelity of teleportation when a coherent state is teleported, using as teleportation resource the state of a system of two coupled bosonic modes interacting with a thermal bath. We also investigate the logarithmic negativity as a measure of entanglement, and study its evolution as a function of time, environment temperature, squeezing parameter, mode frequencies, average number of thermal photons and the coupling between the modes.
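
For reference (in the convention where the vacuum covariance matrix is one half of the identity), the logarithmic negativity of a two-mode Gaussian state is

E_N = \max\{0,\ -\ln(2\tilde{\nu}_-)\},

where \tilde{\nu}_- is the smallest symplectic eigenvalue of the covariance matrix of the partially transposed state; the state is entangled exactly when 2\tilde{\nu}_- < 1.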

The simulations show a strong correlation between the estimated fidelity of teleportation and the environment temperature, the squeezing parameter and the average photon number. The same correlations are evidenced for the logarithmic negativity. The computed quantities also depend on the mode frequencies and on the strength of the coupling between the two modes.

An important result of our work is the identification of the optimal mode frequencies, which allow a successful teleportation (fidelity of teleportation larger than 1/2 and even 2/3) for large times. Moreover, for large times the asymptotic quantum resource state can remain entangled and the fidelity of teleportation retains nonzero values.

References

[1] C. H. Bennett, G. Brassard, C. Crépeau, R. Jozsa, A. Peres, W. K. Wootters, Phys. Rev. Lett. 70, 1895 (1993).

[2] N. Takei, H. Yonezawa, T. Aoki, A. Furusawa, Phys. Rev. Lett. 94, 220502 (2005).

[3] A. Furusawa, N. Takei, Phys. Rep. 443, 97 (2007).

[4] M. Yukawa, H. Benichi, A. Furusawa, Phys. Rev. A 77, 022314 (2008).

[5] G. He, J. Zhang, J. Zhu, G. Zeng, Phys. Rev. A 84, 034305 (2011).

[6] S. Olivares, M. G. A. Paris, U. L. Andersen, Phys. Rev. A 73, 062330 (2006).

[7] A. Zubarev, M. Cuzminschi, A. Isar, accepted for publication in Rom. J. Phys. 74 (2019).

DATA MANAGEMENT, PROCESSING, AND MONITORING

Data Management services in highly distributed environments: recent achievements of the eXtreme-DataCloud project

Daniele Cesini1 1 INFN-CNAF

Bologna, Italy

The eXtreme DataCloud (XDC) project aims at developing data management services capable of coping with very large data resources, allowing future e-infrastructures to address the needs of the next generation of extreme-scale scientific experiments. Started in November 2017, XDC combines the expertise of 8 large European research organisations to develop scalable technologies for federating storage resources and managing data in highly distributed computing environments.

The project is use-case driven, with a multidisciplinary approach addressing requirements from research communities belonging to a wide range of scientific domains: Life Science, Biodiversity, Clinical Research, Astrophysics, High Energy Physics and Photon Science, which are representative of the data management needs in Europe and worldwide.

The use cases proposed by the different user communities are addressed by integrating different data management services ready to manage an increasing volume of data. Scalability and performance tests have been defined to show that the XDC services can be harmonized in different contexts and in complex frameworks like the European Open Science Cloud.

The use cases have been used to measure the success of the project and to prove that the developments fulfill the defined needs and satisfy the final users.

The present contribution describes the results obtained from the adoption of the XDC solutions and provides a complete overview of the project achievements.


Monitoring of exascale data processing

Dana Petcu1,2, Gabriel Iuhasz2 1 West University of Timis, oara, Romania 2 Institute e-Austria Timis, oara, Romania

System monitoring for large-scale HPC platforms is a challenging task, which becomes more difficult as the scale and complexity of the infrastructure increase. In such cases, the system's mean time between failures tends to decrease in inverse proportion to the number of components. Therefore, applications can experience interruptions in service due to hardware failures. As the size of HPC systems increases, application failures become a critical issue, which can have a profound effect on the overall system performance. In addition to hardware failures, novel system software stacks coupled with legacy parallel scientific applications deployed on modern cluster platforms push the envelope of reliability. Thus, it is predicted that the mean time between failures associated with the strawman architectures presented in the DARPA ExaScale Computing Study could be as low as 35-39 minutes at Exascale [1,4].
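
A one-line estimate behind this scaling (standard reliability reasoning, not taken from [1,4]): for N independent components with exponential failure statistics and a per-component mean time between failures m,

\mathrm{MTBF}_{\mathrm{system}} \approx m / N,

so a machine with 10^6 components, each failing once in roughly 30 years (about 1.6 \times 10^7 minutes), fails system-wide about every 16 minutes.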

One important challenge in ExaScale computing consists of developing scalable components able to monitor, in a coordinated and efficient manner, the use of the hardware resources and the behaviour of the applications. Monitoring an Exascale system is difficult due to the large number of components and the tight requirements, such as sub-optimal period scheduling [3]. Moreover, in HPC systems the schedulers assign compute nodes in a static way, trying to run different applications on different nodes to avoid interference. They use CPU availability as the main criterion on the scheduler side, and HPC system statistics, such as TACC Stats [2], to improve system utilisation. However, as the number of nodes grows, a single failure can affect multiple applications in the system simultaneously. Thus, efficient system monitoring plays an important role in three respects: 1) tuning the computing infrastructure; 2) optimising scheduling in order to share resources and provide high efficiency; and 3) detecting faulty components/nodes in the system.

An essential problem in extreme-scale systems is their scalability, due to the large number of resources and the huge amounts of data to be transferred, stored and accessed. This challenges the monitoring system to provide insightful data that allow bottlenecks related to data congestion to be avoided or removed, thereby improving the performance and efficiency of large-scale applications. On the other hand, the scale and complexity of such systems put important constraints on the monitoring system itself.

We propose a distributed event-processing platform capable of analyzing the incoming monitoring data. This analysis allows end users to have an up-to-date cross-section of the internal state and performance of the exascale applications. One important characteristic of large-scale systems is that a substantial quantity of data is available. However, these data are unusable by many of the available analysis methods, due in large part to the fact that semantically labeled data are very hard to obtain or create, requiring a lot of effort which in many cases is not transferable from one large-scale system to another. In this work we aim to present a comprehensive overview of the different methods that enable our event detection system to tackle this type of analysis. We describe our methodology for identifying events and anomalies, as well as ways of using unsupervised methods in conjunction with continual user feedback.
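
A minimal sketch of the flavour of unsupervised detection described above (illustrative only; the window size and threshold are arbitrary choices, not the platform's settings):

import numpy as np

def detect_anomalies(series, window=100, threshold=4.0):
    # Flag samples whose rolling z-score against the recent past is extreme.
    series = np.asarray(series, dtype=float)
    flagged = []
    for i in range(window, len(series)):
        past = series[i - window:i]
        mu, sigma = past.mean(), past.std()
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Synthetic load trace with one injected fault spike at index 3000.
rng = np.random.default_rng(0)
load = rng.normal(0.5, 0.05, 5000)
load[3000] = 3.0
print(detect_anomalies(load))  # contains 3000 (rare statistical hits possible)

In the platform itself such detectors would run close to the data stream, with user feedback used to tune the thresholds and to label the flagged events.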

Acknowledgements. This work has received funding from the EC-funded H2020 ASPIDE project (Agreement 801091: Exascale programIng models for extreme data processing). It was also supported with hardware resources by the Romanian grant BID (PN-III-P1-PFE-28: Big Data Science).

References

[1] ExaScale Computing Study: Technology Challenges in Achieving Exascale Systems, 2008.

[2] T. Evans, W. L. Barth, J. C. Browne, R. L. DeLeon, T. R. Furlani, S. M. Gallo, M. D. Jones, and A. K. Patra. Comprehensive resource use monitoring for hpc systems with tacc stats. In 2014 First International Workshop on HPC User Support Tools, pages 13–21, Nov 2014.

[3] W. M. Jones, J. T. Daly, and N. DeBardeleben. Application monitoring and checkpo- inting in HPC: Looking towards exascale systems. In Proceedings of the 50th Annual Southeast Regional Conference, ACM-SE ’12, pages 262–267. ACM, 2012.

[4] Vivek Sarkar et al. ExaScale Computing Software Study: Software Challenges in Extreme Scale Systems, September 2009.


Jupyter Notebooks as scientific gateways to access cloud computing and distributed storage

Fernando Aguilar1 1 Institute of Physics of Cantabria (IFCA-CSIC)

Santander, Spain

In many scientific disciplines there has always been a need for smart environments in which different software components are accessible, together with data management functionalities (access, analysis, visualization). In recent years, however, access to "Big Data" sources has expanded, and desktop resources are no longer adequate to address the problems raised by data in terms of volume, variety, velocity or veracity. For this reason, new scientific gateways such as Jupyter Notebooks [1], which support different types of kernels in diverse programming languages, are being used to access complex computing resources in a cloud-based fashion. The catalogue of available Jupyter Notebooks keeps increasing, with examples in many scientific domains [2,3].

Jupyter supports different deployment methods, and JupyterHub sets up a flexible, customizable and scalable environment for managing multiple users. Thanks to technologies like Docker, each user accesses via JupyterHub his/her own computing environment, with specific software and Jupyter notebooks installed. The available resources are limited by the hardware, but this can be overcome by integrating Jupyter with different cloud-based systems, such as storage or workload managers.

Authentication and Authorization Infrastructures and standards like OpenID Connect (OIDC), supported by Jupyter, can be exploited in order to integrate those systems in a transparent way. This work describes how the OIDC standard is used within the JupyterHub environment to provide, through cloud computing services, a complete system for accessing distributed storage (Onedata [4]) as well as workload managers (PaaS Orchestrator [5]) that run jobs with specific computing requirements. This complete solution, adaptable to many purposes, facilitates data access in a FAIR [6] way, not only via Jupyter notebooks but also on computing worker nodes and desktops. It also enables data processing with specific hardware requirements via job submission, with the output directly accessible from the notebook.
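
A hedged configuration sketch of such OIDC wiring (a jupyterhub_config.py fragment using the GenericOAuthenticator from the oauthenticator package); all URLs and credentials are placeholders, and the exact endpoints and trait names depend on the provider and on the oauthenticator version (older releases use username_key instead of username_claim):

# jupyterhub_config.py -- `c` is the config object injected by JupyterHub.
from oauthenticator.generic import GenericOAuthenticator

c.JupyterHub.authenticator_class = GenericOAuthenticator
c.GenericOAuthenticator.client_id = "my-client-id"          # placeholder
c.GenericOAuthenticator.client_secret = "my-client-secret"  # placeholder
c.GenericOAuthenticator.authorize_url = "https://aai.example.org/oidc/authorize"
c.GenericOAuthenticator.token_url = "https://aai.example.org/oidc/token"
c.GenericOAuthenticator.userdata_url = "https://aai.example.org/oidc/userinfo"
c.GenericOAuthenticator.scope = ["openid", "profile", "email"]
c.GenericOAuthenticator.username_claim = "preferred_username"

With the token obtained at login, the same identity can then be presented to the storage and orchestration services, which is what makes the integration transparent to the user.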

References

[1] Perez, F., & Granger, B. E. (2007). IPython: A System for Interactive Scientific Computing. Computing in Science & Engineering, 9(3), 21–29. https://doi.org/10.1109/MCSE.2007.53

[2] White, J. T., Fienen, M. N., & Doherty, J. E. (2016). A python framework for environmental model uncertainty analysis. Environmental Modelling & Software, 85, 217–228. https://doi.org/10.1016/J.ENVSOFT.2016.08.017


[3] Fernandez, N. F., Gundersen, G. W., Rahman, A., Grimes, M. L., Rikova, K., Hornbeck, P., & Ma’ayan, A. (2017). Clustergrammer, a web-based heatmap visualization and analysis tool for high-dimensional biological data. Scientific Data, 4, 170151. https://doi.org/10.1038/sdata.2017.151

[4] Dutka, Ł., Wrzeszcz, M., Lichoń, T., Słota, R., Zemek, K., Trzepla, K., … Kitowski, J. (2015). Onedata – A Step Forward towards Globalization of Data Access for Computing Infrastructures. Procedia Computer Science, 51, 2843–2847. https://doi.org/10.1016/j.procs.2015.05.445

[5] Caballer, M., Zala, S., García, Á. L., Moltó, G., Fernández, P. O., & Velten, M. (2018). Orchestrating Complex Application Architectures in Heterogeneous Clouds. Journal of Grid Computing, 16(1), 3–18. https://doi.org/10.1007/s10723-017-9418-y

[6] Wilkinson, M. D., Dumontier, M., Aalbersberg, Ij. J., Appleton, G., Axton, M., Baak, A., … Mons, B. (2016). The FAIR Guiding Principles for scientific data management and stewardship. Scientific Data, 3, 160018. https://doi.org/10.1038/sdata.2016.18


Challenges in Mathematical Modeling and Computational Physics in LIT-JINR for 2020–2023

Gh. Adam1,2, J. Buša1,3, O. Chuluunbaatar1, and P. Zrelov1 1 Joint Institute for Nuclear Research (JINR), Laboratory of Information Technologies (LIT),

6 Joliot Curie, 141980 Dubna, Moscow region, Russian Federation 2 Horia Hulubei National Institute for Physics and Nuclear Engineering (IFIN-HH),

30 Reactorului, 077125 Măgurele - Bucharest, Romania 3 Technical University of Košice, Faculty of Electrical Engineering and Informatics,

Nemcovej 32, 04001, Košice, Slovakia

• Basic features of the research foreseen for 2020–2023 are the cooperative character of the work (on 40 projects under development at JINR) and the multidisciplinarity of the research. These assume using the existing expertise in the Laboratory of Information Technologies (LIT) for the solution of challenging problems which require elaborate mathematical modeling, the development of algorithms, and their implementation in packages to be run on the top computing facilities acquired, developed, customized, and maintained in LIT within the Multifunctional Information and Computing Complex (MICC). To this aim, deep and extensive professional expertise is needed along four critical directions: thinking as a computer scientist within several computing paradigms; expert knowledge of the mathematical problems backing the topic of the collaboration; an in-depth grasp of the numerical analysis topics enabling both reduced complexity and full reliability of the developed algorithms; and deep knowledge of the physics side of the problem at hand.

• Unparalleled possibilities have arisen for the solution of computing-intensive tasks through the new heterogeneous computing platform, consisting of the already existing learning and testing HybriLIT cluster and the new "GOVORUN" supercomputer. This calls for long-lasting efforts to grasp the potentialities offered by the "GOVORUN" supercomputer through the three main components of its heterogeneous structure: high performance computing (HPC) with distributed memory (CPU Intel Gold), HPC with shared memory (CPU Intel Phi), and HPC and, especially, ML/DL development of neural network methods (NVIDIA GPU accelerators).

• Within the next four years, two events of the utmost importance for JINR are planned: data accumulation during the future Run 3 of the LHC megaprojects with active JINR participation (ATLAS, ALICE, CMS), and the start of the NICA accelerator and of the MPD/NICA project at JINR. In both cases there will be huge pressure on the costs, coming from the need for resources for offline data storage, processing and analysis. The search for breakthroughs in these large-scale projects through Big Data analytics will involve conceptual developments and step-by-step implementations, within the Big Data approach, of a scalable software-analytical platform for the collection, storage, processing, analysis and retrieval of relevant information and for the visualization of results for the MPD, SPD and BM@N experiments at the NICA accelerator and for experiments within the JINR neutrino program.

The hopes connected with quantum computing come from the prospect of obtaining simultaneously computed solutions to as many tasks as fit the quantum multiplicity of the quantum computer. The orders-of-magnitude accelerations obtainable in this way are a huge incentive for this very risky line of research.

• The 3D modeling and simulation of the magnetic fields inside different installations is a star activity of the LIT experts. This concerns the NICA (JINR) and SIS 100 (GSI) accelerators, future superconducting setups for proton therapy, and nonstandard problems of magnetostatics.

The implementation of new features in the Geant4 mega-package was carried out during the last decade with substantial contributions from JINR. For the future, a better parametrization of some energy ranges and processes, as well as the addition of parallelization features, is foreseen.

The contribution of the LIT scientists to the development of better algorithms for online and/or offline data processing will concern the NICA-related setups (BM@N, MPD and, in the future, SPD); the IBR-2M setups in FLNP (YuMO, HRFD); the step-by-step implementation of the alert system for the BAIKAL GVD; and the CMS (LHC) and CBM (FAIR) experiments.

• Computational mathematics and theoretical computational physics investigations are closely connected to the numerical or symbolic-numerical solution of hard mathematical problems through field-theoretic and molecular dynamics models, taking into account the main features of the physical processes and mathematical models: non-linearity, multi-parametric behavior, and the existence of critical modes and phase transitions. An intense line of research (carried out in collaboration with BLTP) is the development of effective QCD-motivated models for describing the properties of nuclear matter at NICA energies. Molecular dynamics investigations will be aimed at explaining long-range effects, at describing the structural changes of materials under nanocluster irradiation, and at finding the threshold values of the energy loss in irradiated materials that lead to structural changes and through-tracks in thin targets.

The numerical solution of polaron-related phenomena in solid systems or in liquid solutions, and the accurate description of nuclear reactions with exotic nuclei, are challenges waiting for the development of improved algorithms allowing the extension and use of the already developed parallel packages.

Special mention should be made of the theoretical clarifications concerning the development of models allowing the reliable treatment of data obtained under incomplete observations or reduced statistics, a topic of the highest interest for the creation of superheavy elements in FLNR.

Computer algebra methods will serve to simulate quantum systems and quantum information processes, to solve special topics, and to obtain symbolic-numerical solutions of the differential equations arising in the simulation of quantum computing and other physical processes.


Analyzing Data generated by the High Power Laser System in ELI-NP

Georgios Kolliopoulos1, Bertrand de Boisdeffre1 1 ELI-NP / IFIN-HH

Extreme Light Infrastructure - Nuclear Physics (ELI-NP), in Magurele, Romania, is expected to soon become the most advanced research facility in the world focusing on the study of photo-nuclear physics and its applications. The very High Power Laser System (HPLS), consisting of two 10 PW ultra-short pulsed lasers, will enable ELI-NP to tackle a wide range of research topics in fundamental physics, nuclear physics and astrophysics, as well as applied research in materials science, management of nuclear materials, and life sciences.

After the end of the implementation phase by the THALES company, the operational phase will start, with its own challenges related to the operation, maintenance and future upgrades of the HPLS. In order to succeed in this goal, a thorough knowledge of the system is needed. Useful information can be retrieved from the data acquired by the HPLS control and monitoring system. The data recorded every day are very large and need to be treated accordingly. The information which can be extracted will be valuable for the day-to-day monitoring of the overall performance of the HPLS, as well as for the efficient maintenance and the intelligent improvement of this enormous facility.

The HPLS comprises hundreds of devices integrated in a SCADA-type control system. During the operation hours, information regarding the condition of each of these devices, and the measurement it may be performing, is continuously recorded. The frequency of acquisition follows the repetition rate of the beam line on which a given device is mounted; for a 100 TW line, with a repetition rate of 10 Hz, the data acquisition for each device occurs every 100 milliseconds. This amounts to a number of acquisitions per device close to half a million (500,000) for just one day of full operation. These data are saved in HDF5 (Hierarchical Data Format 5) files, the number of which for a single day can exceed one hundred thousand (100,000). The corresponding total size of the data can reach 600 GB per day with both 10 PW ultra-short pulsed lasers in full operation (300 GB per arm). At the end of the working day, all the HDF5 files are compressed and stored on a Network Attached Storage (NAS) server; after compression, the total size is reduced by almost one order of magnitude.

In this work we present our method to parse and process selected information from compressed HDF5 files containing data for a period of several months. New insights have been obtained with regard to the day-by-day activity of the HPLS along this period. In addition, based on findings made during the preliminary analysis, useful observations have been shared with the implementation team for the optimization of the system's data acquisition.

Python has been the programming language used for this project so far. Numerous specialized Python libraries have been utilized, loaded as dependencies of the interpreter. Among them, we want to mention three. The "multiprocessing" library has been used in order to distribute the parsing part of the program over multiple processes. The use of twelve (12) processes in parallel instead of just one has been shown to reduce the total execution time by a factor of four (4); this is very important if one takes into account that, in the future, the complete processing of the data acquired in just one day of operation is expected to last several hours overnight. In addition, the "Pandas" and "NumPy" libraries have been used to reshape, filter and analyze the selected information. Comprehensive analysis of various topics of interest is now possible.
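
A minimal sketch (not the authors' code) of the parallel-parsing pattern just described, combining h5py with a multiprocessing pool; the file locations and the dataset names "timestamp" and "status" are hypothetical:

import glob
from multiprocessing import Pool

import h5py

def parse_file(path):
    # Extract the fields of interest from one HDF5 acquisition file.
    with h5py.File(path, "r") as f:
        return path, f["timestamp"][()], f["status"][()]

if __name__ == "__main__":
    files = glob.glob("/data/hpls/**/*.h5", recursive=True)  # hypothetical layout
    with Pool(processes=12) as pool:      # 12 workers, as in the text
        records = pool.map(parse_file, files)
    print(len(records), "files parsed")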

Figure 1

As an example, Figure 1 shows a visual representation of information initially distributed over hundreds of thousands of HDF5 files, corresponding to data acquired during several months. The subsystem "Set Z", which comprises eight (8) devices, had been operated during three (3) distinct periods of several days each. We wanted to know for how long, with respect to the total time of activity in every period, each of the devices had been in a "Fault" state (meaning that something was wrong with the device itself). One can observe that during period 2, in comparison with period 1, there is a significant increase in the problems reported by the devices D_Z1, D_Z2 and D_Z4. On the other hand, before the beginning of period 3, it seems that almost all of these issues had finally been fixed. The extraction of this kind of information is very important, as it allows tracing the long-term performance of the different subsystems of the HPLS and the efficiency of our work on them.
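
The Figure 1 computation can be paraphrased in a few lines of Pandas (the column names and the toy data below are assumptions for illustration):

import pandas as pd

# Toy per-sample device states; the real data come from the parsed HDF5 files.
df = pd.DataFrame({
    "period": [1, 1, 1, 1, 2, 2, 2, 2],
    "device": ["D_Z1", "D_Z1", "D_Z2", "D_Z2"] * 2,
    "state":  ["OK", "Fault", "OK", "OK", "Fault", "Fault", "OK", "Fault"],
})

# Fraction of each device's samples, in each period, spent in the "Fault" state.
fault_fraction = (
    df.assign(is_fault=df["state"].eq("Fault"))
      .groupby(["period", "device"])["is_fault"]
      .mean()
)
print(fault_fraction.unstack("device"))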

This is an ongoing, promising project which has already provided valuable information. We believe that, in the future, the complete analysis of the day-by-day HPLS data will be a powerful tool in our group's continuous effort to deliver every day a laser beam of high quality.


Cluster monitoring system of the Multifunctional Information and Computing Complex (MICC) LIT

I. Kashunin, A. Dolbilov, A. Golunov, V. Korenkov, V. Mitsyn, and T. Strizh

Laboratory of Information Technologies, Joint Institute for Nuclear Research, 6 Joliot-Curie St, Dubna, Moscow Region, Russia 141980

1. The monitoring system of the MICC Tier-1 and Tier-2 in the Laboratory of Information Technologies (LIT, JINR) was put into operation in early 2015. The steady development of the MICC entailed corresponding increases in the number of devices and of measured metrics. As a consequence, the performance of the monitoring system server became insufficient. The solution to this problem was the construction of a cluster monitoring system, which allowed the load to be distributed from one server to several, thus significantly increasing the level of scalability.

2. Prerequisites for changing the approach: the increased load on the central processor (CPU) of the monitoring system server became acute toward the end of 2017, when the server reached a load of over 80%, i.e. 22 cores out of 24 (Fig. 1). It became obvious that, with the linear growth of the monitoring system load, various failures were possible.

Fig. 1. Graph of the load of the monitoring system server CPU

The idea of solving this problem by distributing the jobs performed by the existing monitoring system server over several servers could not be realized under the Nagios software package. Migration to the Icinga2 software package allowed the load to be distributed between several nodes, while preserving the usual visualization system and saving the statistics of the accumulated data.

3. The migration from Nagios to Icinga2 was made possible by a step-by-step approach, which secured the minimization of the idle time as well as of the loss of data. Since Icinga2 and Nagios have different configuration file syntaxes, it was necessary to write migration scripts. A test of Icinga2's handling of the various Nagios elements requested by users was also needed. The smooth migration required the solution of the tasks sketched below (Fig. 2).



Fig. 2. Main tasks solved within the Nagios to Icinga2 migration

4. The Icinga2 distributed cluster was configured in two steps. Initially, the cluster monitoring system was deployed on a test server, where a special approach allowing the CPU load to be reduced was devised. During operation it turned out that the launch of the data-acquisition scripts accounted for the main load. By entrusting this task to special distribution nodes, the load on the main server decreased significantly. The general scheme of the load distribution is shown in Fig. 3.

Fig. 3. General scheme of the load distribution

The report will describe in detail the implementation of the described scheme in LIT JINR, with emphasis on its advantages. A performance comparison of the previous and the present monitoring systems will also be made.


A Research Management System based on RDF Graphs

Loredana Mocean1, Miranda-Petronella Vlad2 1 Babes-Bolyai University, Department of Business Information Systems

2 Dimitrie Cantemir Christian University, Accounting and Business Administration Department

Introduction. In recent years, higher education institutions have become more focused on Knowledge Management (KM) and the Semantic Web based on RDF [https://www.w3.org/RDF/ (2019)]. Alongside traditional structured data, there now exist semi-structured and unstructured data, and most universities struggle to collect, organize and store their own databases. Our study proposes a new approach to the design and implementation of a Research Management System (RMS) using state-of-the-art Linked Data technologies such as RDF graphs. The aim of our research is to transform an existing relational database from a university (implemented in the RMS) and to rewrite the SQL commands for querying the data in a new, graph-database-driven system. We chose not to rewrite the database in a graph database environment, but to convert the existing database.

The proposal. The new database will be a D2RQ database connected to RDF from Springer (Springer LOD). The D2RQ academic graph contains links to Springer (statements such as ubb:Mocean owl:sameAs Springer:Mocean, where Mocean is the ID that Springer assigns us when we publish something). The benefit is that people no longer have to declare their work in the RMS: they can list their papers directly from Springer using SPARQL, based on the equivalence links between their internal university IDs and their external IDs from Springer LOD.

Design decisions. The implemented RMS is used as a persistent storage mechanism that makes it possible to store data and, optionally, implement functionality. The relational database is powerful because it requires few assumptions about how data are related or how they will be extracted from the database (https://infocercetare.ubbcluj.ro). Modern RDBMS technology will convert the existing relational database (in MySQL) into an RDF graph and connect it to the Springer database within Springer LOD. The relational database will gain NoSQL features, and the graph resulting from the conversion will use an SQL/NoSQL engine to run queries (Figure 1).

Figure 1. The transformation from a relational database to an RDF repository

The collection of tables can be seen in the relational database. We distinguish between the database schema (logical design) and the database instance. A relation schema, with its list of attributes and their corresponding domains, is also represented. MySQL Workbench 6.3 is used for the migration of the SQL database. This application provides a visual console to easily administer MySQL environments and gain better visibility into databases (http://www.mysql.com/products/workbench/). Developers can quickly and easily convert existing applications to run on MySQL, both on Windows and on other platforms. Migration also supports moving from earlier versions of MySQL to the latest releases. The D2RQ platform helps us access a relational database as an RDF graph. The platform offers RDF access to the content of relational databases without having to replicate it into an RDF store (http://d2rq.org/). The D2RQ server was installed, and the database is accessed with the help of the commands: generate-mapping -u root -p "0000" -o grafuri.ttl "jdbc:mysql:///research management", followed by d2r-server grafuri.ttl. The transformation framework involves the actual virtual, normalized relational schema for the input database. Details of the algorithm pertaining to the RDF transformation of the data are shown in Figure 2.
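
Once d2r-server is running, the graph can be queried over its SPARQL endpoint; a hedged Python sketch follows (the port is the d2r-server default, while the prefixes and property names are illustrative assumptions, not the actual mapping):

from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://localhost:2020/sparql")  # default d2r-server port
sparql.setQuery("""
    PREFIX owl: <http://www.w3.org/2002/07/owl#>
    PREFIX ubb: <http://example.org/ubb/>
    SELECT ?paper WHERE {
        ubb:Mocean owl:sameAs ?springerId .
        ?springerId ?hasPaper ?paper .
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["paper"]["value"])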

Figure 2. The algorithm of conversion from relational to RDF.

Conclusions. The new RMS based on RDF is designed for the easy indexing and retrieval of textual documents from large document sets, efficiently and using intelligent queries. All the concepts are based on the schema level, while the theoretical framework is given by description logics, which can be expressed with terms from OWL [1], [2] and RDF Schema. The architecture, the design of the relational database and the design of the graph aim to contribute to the transformation from a relational to a graph database. The proposed Semantic Web-based architecture model and components contain a hierarchical content structure and semantic relationships between concepts. The model can provide useful related information for searching and querying.

References

[1] Fayed, G., Sameh, D., Ahmad, H., Jihad, M. et al. (2006). “E-Learning Model Based On Semantic Web Technology”, International Journal of Computing & Information Sciences

[2] OWL (2016), https://www.w3.org/2001/sw/wiki/OWL


Development of cloud computing, HTC, and HPC services at NGI-RO

Ionut Vasile1, Dragos Ciobanu-Zabet1, and Mihnea Dulea1 1 Department of Computational Physics and Information Technology (DFCTI)

Horia Hulubei National Institute for R&D in Physics and Nuclear Engineering (IFIN-HH) Magurele, Ilfov, Romania

The Operations Centre (NGI-RO) of the Romanian Grid Infrastructure supports the resource centres of the national HTC production infrastructure in activities such as site certification, ticket management, security incident handling, and liaison with the European infrastructure for advanced computing (EGI). Besides this, it hosts and manages its own compute and storage resources, offering cloud, HTC and HPC services to the national and international research communities [1].

This communication reports on the recent developments of the infrastructure and of the service provision at NGI-RO, within the framework of the local contribution to the implementation of the European Open Science Cloud (EOSC).

Due to the complex structure of the centre, inherited from successive, uncorrelated upgrades for the diversification of the services offered to users, a first priority was to simplify the hardware architecture and to standardize the operating systems on the servers. The two CREAM-CEs that managed the HTC and the HPC/MPI jobs on the GRIDIFIN site were decommissioned and replaced by an ARC-CE, and the HPC nodes were migrated to CentOS 7. The ARC-CE is virtualized, with MPI support, and its SLURM resource management system provides a queue dedicated to the eli-np.eu virtual organization (VO). Another queue handles the jobs launched by local users.

A SLURM client node with an attached NVIDIA Tesla GPU accelerator was added for the support of intensive data processing. nvidia-docker was installed in order to launch Docker containers with NVIDIA GPU support.
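As an illustration, such a GPU-enabled container can also be launched programmatically. The snippet below is a minimal sketch, assuming the Docker SDK for Python and the nvidia runtime installed by nvidia-docker; the image name and command are illustrative only, not the site's actual configuration.

```python
# Minimal sketch: run a CUDA-enabled container on the GPU node via the
# "nvidia" runtime provided by nvidia-docker (image/command are examples).
import docker

client = docker.from_env()
logs = client.containers.run(
    "nvidia/cuda:10.0-base",   # example CUDA-enabled image
    command="nvidia-smi",      # verify that the Tesla GPU is visible
    runtime="nvidia",          # route the container through nvidia-docker
    remove=True,               # clean up the container after it exits
)
print(logs.decode())
```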

On the CLOUDIFIN site, the OpenStack cloud platform was upgraded from the Pike to the Rocky version, and the EGI-specific services have been adapted to the new version. Also, a compute node with an attached NVIDIA Tesla GPU accelerator was added.

The EGI Check-in service (AAI proxy) was integrated, so that users now have access to EGI resources and services by using the federated authentication service, which connects EGI's service providers to federated identity providers external to EGI (such as home organizations or social identity providers). The installation of the Keystone-VOMS authorization plugin on site also allows the authentication of users through voms-proxy.

CLOUDIFIN currently supports the following VOs: eli-np.eu, ronbio.ro, gridifin.ro, fedcloud.egi.eu, biomed, and the recently added benchmark.terradue.com.


The CernVM File System Stratum 0 service was installed on two virtual machines, to be used for software distribution by CLOUDIFIN and by DFCTI's HPC site. It will be configured, using VOMS, such that different permissions can be defined for specific software packages, using groups within the same VO.

Recently, NGI-RO joined, as a cloud resource provider, the new EGI partnership with ESA on the project "Copernicus Space Component Worldwide Sentinels Data Access Benchmark", which sets up a benchmarking service for all Sentinel Data Hubs and the Data and Information Access Services (DIAS) managed by ESA.

For this, the benchmark.terradue.com VO was configured locally by performing the following steps: installation and configuration of a bare-metal server; configuration of the cloudkeeper service, which is required for downloading the OpenStack Virtual Appliance images for the cloud VMs; and configuration of the cloud networking services with a public IP for external access to the VMs.

For the support of simulations performed by the nanophysics research community, a new VM image for the gridifin.ro VO was prepared and published in AppDB [2]. This Virtual Appliance was created from a minimal CentOS 7.6 64-bit installation with cloud-init contextualization and contains the following software: FANN (Fast Artificial Neural Network library), TensorFlow, and SIESTA. Besides this, the previously developed Virtual Appliance that provides the EPOCH software package for the eli-np.eu VO has been updated [3].

Acknowledgements: This work was partly funded by the Ministry of Research and Innovation under the contracts PN 19 06 02 05 (program NUCLEU), 71 (Romanian-JINR cooperation project, Order 397/27.05.2019), and project H2020 EOSC-hub.

References

[1] I. Vasile, D. Ciobanu-Zabet, M. Dulea, Advanced computing support for non-HEP research communities at IFIN-HH, in Proceedings of RO-LCG 2018 “Grid, Cloud, and High-Performance Computing in Science”, Cluj-Napoca, 17-19 October 2018, IEEE Xplore Digital Library.

[2] https://appdb.egi.eu/store/vappliance/centos7.0.nano

[3] https://appdb.egi.eu/store/vappliance/image.for.centos.6.8.x86.64

MOLECULAR BIOLOGY AND BIOCOMPUTING

Application of the FMO method to investigation of antimicrobial peptides interaction with membrane models

George Necula1, Mihaela Bacalum2, Lorant Janosi3, Mihai Radu2 1 Department of Computational Physics and Information Technologies,

Horia Hulubei National Institute for R&D in Physics and Nuclear Engineering, Magurele, Romania 2 Department of Life and Environmental Sciences,

Horia Hulubei National Institute for R&D in Physics and Nuclear Engineering, Magurele, Romania 3 Department of Molecular and Biomolecular Physics,

National Research and Development Institute for Isotopic and Molecular Technologies, Cluj-Napoca, Romania

The ever-increasing phenomenon of multiple drug resistance (MDR) has led researchers to look for alternatives to traditional antibiotics. One of the more promising alternatives are the antimicrobial peptides (AMPs). AMPs are part of the innate immune system and are typically cationic and amphiphilic oligopeptides that have the ability to disrupt the membranes of a wide range of microbial targets, which confers on them anti-bacterial, anti-fungal and anti-viral activity, while avoiding MDR.

In order to facilitate the in-silico investigation of the interaction of AMPs with different membrane models, we have integrated into the RoNBio molecular modelling system various workflows that automate the building of atomistic AMP-membrane systems, perform molecular dynamics (MD) simulations, and analyse MD trajectories: hydrogen bond detection, area per lipid (APL), AMP insertion into the membrane, and AMP MM binding energy calculation. To the existing workflows we have added one that automates the execution of the fragment molecular orbital (FMO) method, in order to obtain a detailed and accurate list of AMP interactions with the phospholipids of the membrane models. As the name suggests, the principle of FMO is the fragmentation of large molecular complexes (e.g. receptor and ligand) and the performance of QM calculations on the individual fragments. The considerable speed-up of FMO derives from the highly efficient parallelization of the QM calculations using GDDI (the generalized distributed data interface).

The main objective of the paper was to validate the FMO method for the investigation of the interaction of antimicrobial peptides with membrane models, as an alternative to the more laborious, difficult to set up, and computationally intensive binding free energy methods such as FEP, TI, or PMF.

For the validation of the FMO workflow and method, we used an Arg-Trp-rich peptide in DOPC:DOPG (85:15 molar ratio) and pure DOPC bilayer complexes, built from representative frames (selected based on proximity to the membrane surface) extracted from two molecular dynamics simulations. The FMO-derived PIEs (Pair Interaction Energies) were correlated with the MM electrostatic binding energy and the experimental ΔG. The FMO calculations were performed with GAMESS (General Atomic and Molecular Electronic Structure System); energy decomposition analysis was performed at the spin-component-scaled second-order Møller-Plesset (SCS-MP2) level with the 6-31G** basis set, and PCM was added in order to describe solute-solvent interactions.


The PIE consists of five energy terms: electrostatic (ΔEes) and charge transfer (ΔEct), which are indicative of hydrogen bonds, salt bridges and polar interactions; dispersion (ΔEdi), which is hydrophobic in nature; exchange repulsion (ΔEex), which describes the steric repulsion of the electrons; and, finally, the solvation energy (ΔGsol).
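In this decomposition, the total pair interaction energy between two fragments I and J is the sum of the five terms above; schematically (a standard PIEDA-type form, with the charge-transfer and mixing contributions folded into the charge-transfer term here):

$$\Delta E_{\mathrm{int}}^{IJ} \;=\; \Delta E_{\mathrm{es}}^{IJ} + \Delta E_{\mathrm{ct}}^{IJ} + \Delta E_{\mathrm{di}}^{IJ} + \Delta E_{\mathrm{ex}}^{IJ} + \Delta G_{\mathrm{sol}}^{IJ}$$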

Although the FMO and MM binding energies correlated reasonably well (R2 = 0.88 for the DOPC+DOPG system and R2 = 0.79 for the pure DOPC system), only the binding energy of the mixed system matched the experimental ΔG, while there was a significant discrepancy between the computed MM binding energy and the experimental ΔG for the DOPC system. The PIE for the DOPC system correlated much better with the experimental binding energy than the MM method, thus indicating a possible underestimation of the DOPC contribution to the electrostatic binding energy. Also, a weak correlation (R2 = 0.28) between the MM electrostatic binding energy and the number of hydrogen bonds was observed for the DOPC system, but the PIE revealed other interactions that compensated for the lack of hydrogen bonds: CH-π interactions, distorted hydrogen bonds, charge-dipole interactions and CH-carbonyl group interactions.

The FMO method provides a detailed list of the interactions between the AMPs and the lipids of the membrane, together with a chemical breakdown of these interactions. The FMO results indicate that the method could be valuable for the in-silico bioactivity assessment of AMPs.

Acknowledgments: This work was supported by the Ministry of Research and Innovation through the Nucleu project PN 19 06 02 05.


Molecular dynamics simulations on the interaction between a silver nanoparticle and lipid monolayers

Maria Mernea1, Octavian Calborean1, Speranta Avram1, Ionut Vasile2, Dan Florin Mihailescu1 1 University of Bucharest

2 Horia Hulubei National Institute for R&D in Physics and Nuclear Engineering (IFIN-HH)

Nanoparticles (NPs) hold great potential in the biomedical field, being promising candidates for applications such as bioimaging, biosensing, drug delivery, cancer therapy or antibacterial agents. Silver NPs present antibacterial properties, being effective against antibiotic-resistant bacteria. One of the mechanisms mediating their toxicity against bacteria involves the disruption of the bacterial cell membranes. However, these NPs could also disrupt the membranes of eukaryotic cells, which would make them unsuitable for use in humans. Experiments on the interaction between silver NPs and model membranes showed a dose-dependent disruption of the lipid bilayers. Also, the lipid composition of the bilayers slightly influences the interaction, with negatively charged membranes presenting a stronger interaction with the NPs.

Investigating the interaction between NPs and membranes is extremely important for addressing NP toxicity. Molecular simulation techniques are very helpful in this respect, as they allow the prediction of interactions by considering different parameters such as lipid composition, NP size, temperature, etc. The results could guide experiments involving expensive reagents and complicated procedures, and could give atomic-level insight into the mechanism of interaction of NPs with membranes.

Here we used molecular simulation techniques in order to evaluate the interaction between 5 nm silver NPs, known to present enhanced antimicrobial activity, and monolayers comprising lipids specific to the outer membranes of Gram-negative bacteria (lipopolysaccharides, LPS) and lipids specific to eukaryotic membranes (phosphatidylcholine, POPC). The two types of lipids present major structural differences. While the structure of POPC molecules consists of two fatty acid tails attached through a glycerol molecule to a head group comprising choline, the LPS molecule is structured into: (i) a lipidic part formed by many fatty acids attached to a glucosamine disaccharide; (ii) an oligosaccharide core; and (iii) a polysaccharide region. Additionally, LPS molecules are negatively charged, while POPC molecules are neutral.

The silver NP was modelled using the CHARMM software and the CHARMM-Metal force field. Two 15 nm × 15 nm POPC and LPS monolayers were built using CHARMM-GUI. The simulation systems were built by adding the NP at a distance of 3 nm from the monolayers. Using NAMD, we performed steered molecular dynamics (SMD) simulations in which the NP was pulled towards each monolayer at a constant velocity of 0.00002 nm per 2 fs time step. During the 2 ns simulations, the NP encountered and passed through the monolayers. In the SMD simulations that we performed, the NP was pulled with a force that adjusts in order to maintain the constant velocity.


Therefore, the pulling force values are informative of the resistance opposed by the monolayers to the passage of the NP.
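For reference, in constant-velocity SMD as implemented in NAMD, the pulled atoms are attached by a spring of stiffness k to a virtual point moving at velocity v along the pulling direction, so the applied force derives from the harmonic guiding potential

$$U(t) \;=\; \tfrac{1}{2}\,k\,\bigl[v\,t-\bigl(\vec{r}(t)-\vec{r}_{0}\bigr)\cdot\vec{n}\bigr]^{2},$$

and the recorded force grows whenever the monolayer resists the advance of the NP.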

The analysis of the SMD force values derived from the simulations involving both monolayers shows that the NP crossed the LPS monolayer more easily than the POPC monolayer. At the end of the simulation, in the case of the LPS monolayer, the NP coated with LPS molecules was completely detached from the monolayer, while in the simulation with POPC, the POPC molecules attached to the NP were still interacting with the POPC molecules remaining in the monolayer.

Our results show that the LPS monolayer is more susceptible to the action of the considered NP. Future simulations will address the reversibility of the monolayer perturbation and the effect of silver NPs on bilayers specific to bacteria and eukaryotic cells. The methodology developed in this study could be applied to investigate the interaction of other NPs with lipid monolayers and bilayers.

Acknowledgments: This work was supported by a grant of the Romanian Ministry of Research and Innovation, CCCDI-UEFISCDI, project number PN-III-P1-1.2-PCCDI-2017-0728/2018, contract no 63PCCDI/2018, within PNCDI III.


Distributed bioinformatics analyses on an SGE cluster, for variant calling on bovine whole-genome samples

Alexandru Eugeniu Mizeranschi1*, Ciprian Valentin Mihali1, Radu Ionel Neamț1, Mihai Carabaș2, Daniela Elena Ilie1

1 Research and Development Station for Bovine - Arad, 310059, Arad, Bodrogului 32, Romania 2 Politehnica University of Bucharest, Splaiul Independenţei nr. 313, Bucharest, Romania

*[email protected]

The completion of the Human Genome Project [1] in 2003 marked the beginning of the so-called post-genomic era in biology. Coupled with the advancements in high-throughput DNA and RNA sequencing technologies during the last decade, this has opened up the possibility of studying the whole genome and transcriptome of an organism.

Typical applications of DNA sequencing include genome assembly and variant calling, i.e. the identification of single-nucleotide polymorphisms (SNPs) or insertions and deletions (indels) relative to a reference genome sequence, with uses in evolutionary biology, medicine, ecology and forensics. In medicine especially, genome-wide association studies have been proposed for identifying SNPs and indels that are statistically associated with a visible trait or a biological condition such as the risk of, or resistance to, disease. Similarly, RNA sequencing is used for transcriptome assembly and for the assessment of differential gene expression and differential mRNA splicing between different biological conditions (e.g. different tissues) [2].

Bioinformatics and biomedicine, through high-throughput sequencing techniques, have become top areas in terms of the volume of data generated annually [3], along with other disciplines such as physics or astronomy. The ever-increasing number and size of sequencing datasets pose a great challenge in terms of efficient storage, data analysis and interpretation.

A wide range of software solutions, both proprietary and open-source, have been proposed for automating the analysis of high-throughput sequencing data [4]. The research-based, open-source solutions are also called computational bioinformatics pipelines. Although free to use, they are usually less user-friendly than commercial, proprietary solutions and require knowledge of computer programming and of open-source operating systems such as Linux. Moreover, due to the high complexity of sequencing data, their analysis typically requires access to high-performance computing hardware.

Bcbio-nextgen [5] is an open-source bioinformatics pipeline, implemented in Python and able to run on a wide range of hardware, from personal computers to high-performance resources such as computer clusters and shared-memory supercomputers. It automates most of the data analysis steps for DNA and RNA high-throughput sequencing data and aims to be more user-friendly than similar bioinformatics pipeline systems. Although it does not require computer programming abilities, it does require knowledge of Linux-based systems in order to be used.

Here, we report our experiences with running the Bcbio-nextgen bioinformatics pipeline on an SGE cluster administered by the Politehnica University of Bucharest. We have performed variant calling on sequencing data for 40 Bos taurus samples made publicly available by the 1000 Bull Genomes project [6], using the UMD3.1 (bosTau6) reference genome assembly [7]. The genome complexity of cattle is comparable with that of humans in terms of total genome length and estimated number of protein-coding genes.

In order to assess the computational requirements and the efficiency of running Bcbio-nextgen on the SGE cluster, we also performed analyses on smaller sets of samples and compared the runtimes and the storage space used. Finally, we varied the number of cluster nodes (and, implicitly, the total number of CPU cores) used for the analysis, in order to determine the horizontal scalability of Bcbio-nextgen on the SGE cluster.
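As an illustration of how such a scalability measurement can be summarized, the sketch below computes speedup and parallel efficiency from per-node-count runtimes; the numbers are placeholders, not the measurements reported here.

```python
# Minimal sketch: summarize horizontal scalability from wall-clock runtimes.
# The numbers are illustrative placeholders (nodes -> hours), not study data.
runtimes = {1: 96.0, 2: 52.0, 4: 29.0, 8: 18.5}

base = runtimes[1]  # single-node baseline
for nodes, hours in sorted(runtimes.items()):
    speedup = base / hours
    efficiency = speedup / nodes  # 1.0 would be ideal linear scaling
    print(f"{nodes:2d} nodes: speedup {speedup:4.2f}x, efficiency {efficiency:.0%}")
```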

We discuss our results in the context of the ever-increasing computational complexity of bioinformatics data analyses and conclude that future upgrades in high-performance computing infrastructure will be required in order to facilitate more complex scenarios involving larger numbers of biological samples.

References

[1] The 1000 Genomes Project Consortium, A global reference for human genetic variation, Nature, 526, pp. 68-74, 01 Oct 2015.

[2] E. A. Kotelnikova, M. Pyatnitskiy, A. Paleeva, O. Kremenetskaya, D. Vinogradov, Practical aspects of NGS-based pathways analysis for personalized cancer science and medicine. Oncotarget, 7(32), pp. 52493–52516, 2016.

[3] Z. D. Stephens, S. Y. Lee, F. Faghri, R. H. Campbell, C. Zhai, M. J. Efron, R. Iyer, M. C. Schatz, S. Sinha, G. E. Robinson, Big Data: Astronomical or Genomical? PLoS Biol., 13(7), 2015.

[4] J. Leipzig, A review of bioinformatic pipeline frameworks. Brief. Bioinform., 18(3), pp. 530-536, 2017.

[5] https://github.com/bcbio/bcbio-nextgen

[6] B. J. Hayes, H. D. Daetwyler, 1000 Bull Genomes Project to Map Simple and Complex Genetic Traits in Cattle: Applications and Outcomes, Annu Rev Anim Biosci, 7, pp. 89-102, 2019.

[7] A. V. Zimin, A. L. Delcher, L. Florea, D. R. Kelley, M. C. Schatz, D. Puiu, F. Hanrahan et al., A whole-genome assembly of the domestic cow, Bos taurus. Genome Biol., 10(4), R42, 2009.

MACHINE LEARNING

Accurate data identification in low signal-to-noise ratio series

Barnoviciu Eduard1,3, Carata Serban1,2, Ghenescu Veta2, Ghenescu Marian1,2, Mihaescu Roxana1,2, Chindea Mihai1

1 UTI GRUP 2 Institute for Space Science (ISS)

3 University Politehnica of Bucharest (UPB)

In this paper we will present our unsupervised machine-learning-based method to filter signals with a low signal-to-noise ratio in time series. We will describe both the generalized version of the algorithm and a simplified example using a discrete, bi-dimensional signal.

First, we will explain what unsupervised machine learning is and how it can be trained to differentiate the useful signal from the noise. We will describe how we can represent our data in a feature space, by extracting measurable properties of the signal and using them as features. Afterwards, we use unsupervised machine learning to group the signal into two different clusters, separating the noise from the true signal based on the aggregated differences between their features. The model is trained on data that is updated continuously and in real time, requiring minimal prior information.

Secondly, we will detail the pipeline of our algorithm. The main focus of our work was to extract as many relevant features as possible from the signal. For this, we associated signal samples into spatially correlated groups, according to the Euclidean distance between them. Each group of signal samples forms a cloud of points, with each sample being one point. Furthermore, using the Minimum Volume Enclosing Ellipsoid algorithm, we construct an ellipsoid for each cloud of points. This was done in order to extract more useful information about the signal, based on its spatial distribution. Having that ellipsoid, we were able to use its geometrical properties, such as its axis lengths, axis ratio and tilt angle, as new features. The next step in the filtering process was to concatenate all these features into a descriptor. A data set was formed from the last n frames, based on which the k-means algorithm clusters all the clouds into two different groups, according to their descriptors: useful signal and noise.
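A minimal sketch of this pipeline is given below, using DBSCAN for the Euclidean grouping and a covariance-based ellipse fit as a simple stand-in for the Minimum Volume Enclosing Ellipsoid; the function names, parameters and feature set are illustrative, not the authors' implementation.

```python
# Minimal sketch: group 2-D samples, extract ellipse-shaped descriptors,
# then k-means the descriptors into two clusters (signal vs. noise).
import numpy as np
from sklearn.cluster import DBSCAN, KMeans

def ellipse_features(points):
    # Covariance-based ellipse fit (a simple stand-in for the MVEE):
    # axis lengths, axis ratio and tilt angle become the features.
    evals, evecs = np.linalg.eigh(np.cov(points.T))
    minor, major = np.sqrt(np.abs(evals))
    tilt = np.arctan2(evecs[1, 1], evecs[0, 1])   # orientation of major axis
    return [major, minor, major / max(minor, 1e-9), tilt, len(points)]

def separate(samples, eps=1.0):
    labels = DBSCAN(eps=eps, min_samples=3).fit_predict(samples)
    clouds = [samples[labels == g] for g in set(labels) if g != -1]
    descriptors = np.array([ellipse_features(c) for c in clouds])
    split = KMeans(n_clusters=2, n_init=10).fit_predict(descriptors)
    return clouds, split   # one cluster gathers signal clouds, the other noise
```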

Finally, in order to fairly evaluate our filtering algorithm, we created a testing environment: a synthetic signal is generated following basic patterns, with random noise added. This provides useful accuracy metrics and helps us evaluate our proposed method. In Figure 1 we show a graph in which we plot the balanced accuracy of our filter against the noise level. We treated 10 cases in each plot, using 10 to 100 samples of useful signal and different noise percentages, up to more than 90 percent noise. We used two signal shapes: in (a) we show the spiral pattern, while in (b) the line pattern. To assess the best efficiency, we extracted the receiver operating characteristic in order to compute the balanced accuracy.


Figure 1: (a) - Spiral Pattern Graph, (b) - Line Pattern Graph

Some experimental results are shown in Figure 2: (a) shows the original signal, (b) a step in the middle of processing, which highlights the ellipses being created, and (c) the final result. Finally, we compared this method of clustering with regular k-means applied to the raw data and obtained a significant improvement of nearly 0.3 in balanced accuracy, from 0.6 BACC with basic clustering to 0.9 with our method. More in-depth results will be presented in the paper.

Figure 2: Spiral pattern filtering example


Representing Character Sequences as Sets: A simple and intuitive string encoding algorithm for text data cleaning

Martin Marinov1, Alexander Efremov1 1 Faculty of Automation, Technical University of Sofia,

Sofia city, Bulgaria

Background. Data preparation is vital in the field of machine learning, especially in natural language processing (NLP). In many ways, cleaning text data is still more of an art than an exact science. Most of the conventional solutions to this fundamental step of building NLP applications rely on assumptions, rules of thumb and heuristics.

This paper briefly describes a method of encoding character sequences which is inspired by biological memory systems. Like other solutions, the algorithm relies on generalizations about the way writing systems work, as well as on two very simple heuristic rules for quantifying the degree of similarity between encoded character sequences. The core principle is maximizing technical and conceptual simplicity without significant degradation of efficacy.
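As a toy illustration of the general idea (not the authors' actual encoding, which is not reproduced in this abstract), one can encode each word as the set of characters it contains and score similarity by set overlap; spelling mistakes then perturb the encoding only slightly:

```python
# Toy sketch: encode a character sequence as a set and compare encodings
# with Jaccard similarity. This is an illustrative stand-in for the paper's
# encoding and its two similarity heuristics, which are not detailed here.
def encode(word: str) -> frozenset:
    return frozenset(word.lower())

def similarity(a: frozenset, b: frozenset) -> float:
    return len(a & b) / len(a | b) if (a | b) else 1.0

# Transposed characters leave the set unchanged ...
print(encode("langauge") == encode("language"))           # True
# ... while unrelated words share few characters.
print(similarity(encode("language"), encode("dialect")))  # 0.3
```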

Objectives. The algorithm is intended for use in simple and intuitive data preprocessing procedures. The goal is to produce tools capable of cleaning texts written in any language that uses a phonemic writing system. Texts written in two specific languages (English and Bulgarian) are being processed during development and experimentation.

User-friendliness and minimization of data obfuscation are the driving requirements of the algorithm. Another requirement is compatibility with other machine learning methods, since the algorithm is not intended for solving an entire NLP case from start to finish. It is to be used only during the data preparation phase.

Experiments. To test the viability of the approach, the task of searching for words in a dictionary was used. The problems facing any word search algorithm can be divided into two categories, based on where they originate. The data-related issues are:

1. Spelling mistakes, such as missing characters, out of place characters, or incorrect characters used in place of others by mistake.

2. Corrupted text, such as text with improper encoding or text which is intentionally made unintelligible, by the use of a cypher.

3. Word delimiters. The space character isn't a guaranteed way of separating words.

4. Identifying lemmas, regardless of the various morphological forms that they have.

5. The size of the documents that have to be processed. Scalability is a critical consideration for any text processing application.

6. Named entity recognition.


7. The context in which words are used.

In addition, the dictionary-specific issues are:

Completeness: it is guaranteed that there will be words which are not entered in the dictionary. This may be intentional, in the case of domain-specific dictionaries, but it is still something which has to be accounted for.

Ease of compilation and maintenance: all dictionaries have to be updated from time to time, particularly the ones used for semantic analysis. The ever-changing vocabulary used by people is the main reason for this.

Named entity recognition.

Experiments haven't been done with corrupted documents or ciphered text. Named entity recognition is also something which hasn't been explored. It is present as both a dictionary issue and a data issue, because the most reliable way to detect names is to check a reference list of verified name spellings. Context is also important in recognizing the names of places and people, and word meanings in general, which brings point 7 into focus. That final point is perhaps the biggest stumbling block for all automated text processing applications, but the current iteration of the algorithm isn't ready to tackle it. Development is ongoing and, for now, the algorithm can only handle issues of a grammatical or spelling nature.

Conclusions. The ability of the algorithm to handle data-related issues has been explored, specifically spelling, word extraction and lemmatization. Overall, the current implementation performs well and follows the requirements set for the development effort.

Scalability has also been examined, and preliminary results indicate that the Python implementation of the algorithm has linear time complexity. It is simple enough that it can be parallelized if necessary, but such experiments haven't been done so far.

Future work will focus on compiling a dictionary of higher quality than the one used at present, and possibly on adding annotations for certain words of interest. In addition, precise tracking of character positions will also be incorporated into the string encoding algorithm.


Speeding up atomistic DFT simulations by machine learning methods

Tudor Luca Mitran1, George Alexandru Nemneș1,2 1.Horia Hulubei National Institute for R&D in Physics and Nuclear Engineering,

077126 Măgurele-Ilfov, Romania 2 University of Bucharest, Faculty of Physics, MDEO Research Center,

077125 Măgurele-Ilfov, Romania

Numerical ab initio DFT methods have proven to be some of the most successful ways of accurately and efficiently determining material properties. Although unquestionably useful, they are hindered by high computational costs and poor scaling, which make them practically applicable only to systems of at most hundreds of atoms. The same high computational cost hinders their use in high-throughput simulations, where one needs to test thousands of systems in order to optimize a particular desired physical property.

In recent years, different machine learning (ML) methods have been successfully used to dramatically speed up computing times for a wide variety of problems, including DFT simulations. In our current work, neural network algorithms are used to predict the energy gaps (the differences between the HOMO and LUMO energies) of graphene nanoflakes with embedded hexagonal boron nitride domains (Fig. 1).

Fig. 1. The predicted energy gap (red) has a high accuracy, with an R2 coefficient of 89%.

Different features are tested in order to better pre-process the inputs, and several network parameters are investigated in order to increase the accuracy of the simulation output.
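A minimal sketch of such a gap-prediction setup with a feed-forward network is shown below. The descriptors and targets here are randomly generated placeholders; in the actual study the inputs encode the nanoflake geometry and the targets are DFT-computed HOMO-LUMO gaps.

```python
# Minimal sketch: regress a scalar "gap" from structural descriptors with a
# small neural network. Data below are synthetic placeholders, not DFT data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.random((1000, 64))                                 # placeholder descriptors
y = X @ rng.random(64) + 0.1 * rng.standard_normal(1000)   # placeholder gaps

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("R2 on held-out flakes:", r2_score(y_te, model.predict(X_te)))
```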

In a more complex approach, we also investigate the possibility of estimating the electron ground-state density n(ri) starting from the initial electron density n(r0) of the isolated atoms (Fig. 2). This ability would be highly desirable, since n(ri) directly determines all the material properties of interest, and it would also provide a substantial computational speedup if used as an initial input density in DFT simulations.

Fig. 2. The predicted electron density (red) has a high accuracy when compared to the computed density, with an R2 coefficient of 82%.

Such ML methods are of high practical importance, since one can use them to quickly but approximately gauge certain physical properties, as in the case of directly predicting the bandgap from the atomic configuration, or employ them as a speed-up method for more precise DFT simulations.

SOFTWARE APPLICATIONS

Software developments for experimental data processing in NICA projects

Mikhail Kapishin1, Vasilisa Lenivenko1, Vladimir Palichik2, Valery Panin3, Nikolay Voytishin2 1 Laboratory of High Energy Physics Joint Institute for Nuclear Research

6 Joliot-Curie St, Dubna, Moscow Region, Russia 141980 2 Laboratory of Information Technologies

Joint Institute for Nuclear Research 6 Joliot-Curie St, Dubna, Moscow Region, Russia 141980

3 CEA 91191 Gif-sur-Yvette, Saclay, France

BM@N (Baryonic Matter at Nuclotron) [1] is a fixed-target experiment. It is the first data-taking experiment of the NICA (Nuclotron-based Ion Collider fAcility) accelerator complex. Its main goal is to study the properties of hadrons and the production of (multi-)strange hyperons at the threshold of hypernuclei formation.

The processing of experimental data is a crucial task for any experiment. Experimental and simulated data in the BM@N experiment are processed using the BMNROOT software package, which includes all the required modules for fast and precise data acquisition, decoding, reconstruction and further physics analysis.

The entire chain will be illustrated with the example of one detector system of the BM@N setup, the Drift Chambers. This system is the main detector of the setup, having formed the outer tracker during the last physics runs. A special reconstruction algorithm for these detectors was developed and implemented in the official BM@N software package. Before proceeding to the reconstruction of physics data, the main detector performance parameters were estimated. These parameters are presented along with some additional commissioning procedures. Once the reconstruction for the Drift Chambers was finalized and optimized, the reconstructed objects from the Drift Chambers were used in the particle identification chain.

In the last physics run, which ended in April 2018, besides the main BM@N physics program, the first measurement of Short-Range Correlations (SRC) in carbon nuclei was carried out. About 20% of the nucleons of a carbon nucleus are located, at any given moment, in intensely interacting SRC pairs [2], which are characterized by a large relative momentum and a small centre-of-mass momentum in comparison with the Fermi momentum [3]. Traditionally, the properties of SRC pairs are studied in hard scattering reactions, when a particle from the beam (an electron or a proton) interacts with one nucleon of a nucleus. In the BM@N experiment, inverse kinematics was used: a carbon ion beam collided with a liquid hydrogen target, while the nucleus, after the interaction, continued to move forward and was recorded by the tracking and time-of-flight detectors of the BM@N setup. The properties of the residual nucleus at a beam momentum of 4 GeV/c per nucleon have never been studied before.


A brief overview of the preliminary results of analyzing the data of the first SRC measurement at BM@N will be presented.

References

[1] M. Kapishin, Eur. Phys. J. A 52, 213-219 (2016).

[2] O. Hen et al., Reviews of Modern Physics (2016).

[3] H. R. Glyde, Intermediate Condensed Matter Physics, Chapter 8; N. Ashcroft and N. D. Mermin, Solid State Physics (1976).


Improving cooling by airflow control inside the Data Center - model and simulation

Mihail-Radu-Cătălin Trușcă1, Jefte Nagy1, Ştefan Albert1 and Felix Fărcaș1 1 National Institute for R&D of Isotopic and Molecular Technologies (INCDTIM)

67-103 Donat str., 400293 Cluj-Napoca, Romania

The cooling infrastructure is one of the vital support subsystems and a major cost factor in Data Centers. If it is not properly implemented, the power required to cool the equipment can significantly impact the power usage effectiveness (PUE) of the Data Center, becoming one of the limiting factors for its computing capacity [1,2].
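For reference, PUE is the ratio of the total energy consumed by the facility to the energy delivered to the IT equipment alone, so values close to 1 indicate that little overhead is spent on cooling and power distribution:

$$\mathrm{PUE}=\frac{E_{\text{total facility}}}{E_{\text{IT equipment}}}$$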

Here we report the optimization of the air cooling system of the INCDTIM Data Center, going through the following stages:

(i) realizing the 3D CAD model of the current distribution of airflow speed and temperature, based on the input data collected by sensors;

(ii) designing different solutions for directing the airflow in the Data Center, in order to achieve a better separation of the cold and hot regions;

(iii) simulating the temperature variation for the cooling solutions designed, using the ANSYS CFD software;

(iv) practical implementation of the best cooling solution found;

(v) monitoring the system operation and adjusting the cooling parameters according to the values obtained in the real system.

The aim of this work was to achieve a better management of the temperature variation inside the Data Center by controlling the airflow, and to obtain the maximum cooling capacity according to the 3D model of the Data Center. The accuracy and limits of the discretized 3D modeling are also discussed.

Acknowledgement: This work was funded by the Ministry of Research and Innovation under contract no. 6/2016 (program PNIII-5.2-CERN-RO) and the Romanian-JINR cooperation project (Order 396-73/27.05.2019 and Order 397-75/27.05.2019).

References

[1] Nadjahi, C., Louahlia, H., Lemasson, “A review of thermal management and innovative cooling strategies for data center”, Sustainable Computing: Informatics and Systems 19 (2018) 14-28

[2] Wen-Xiao Chu, Chi-Chuan Wang, “A review on airflow management in data centers”, Applied Energy 240 (2019) 84–119


Detection and Validation of Asteroids using the NEARBY Software Platform

Afrodita Liliana Boldea1,2, Ovidiu Vaduvescu3,4,5, Costin Radu Boldea2

1 National Institute for Nuclear Physics and Engineering, Str. Reactorului, 30, P.O. Box MG-6, RO-077125 Bucharest-Magurele, Romania

2 University of Craiova, A.I.Cuza 13, RO-200385, Craiova, Romania 3 Isaac Newton Group of Telescopes (ING), Apartado de Correos 321, E-38700 Santa Cruz de la Palma,

Canary Islands, Spain 4 Instituto de Astrofisica de Canarias (IAC), vía Láctea s/n, E-38200 La Laguna, Tenerife, Spain

5 PhD Coordinator, University of Craiova, A.I.Cuza 13, RO-200385, Craiova, Romania

The paper presents the results of the testing, calibration and validation phase of the new NEARBY asteroid detection platform, an experimental study that used data collected by the observing infrastructure of the Instituto de Astrofisica de Canarias and the Isaac Newton Group (ING) of Telescopes from La Palma, Spain.

The NEARBY platform provides astronomers with an integrated framework that enables them to easily process large amounts of high-resolution astronomical images with the purpose of identifying Near Earth Asteroids and other moving objects from the Solar System. The platform was developed in 2017-2018 through collaborative research between the Technical University of Cluj-Napoca and the University of Craiova.

The NEARBY platform was designed on three hierarchical layers, each one on top of the other. The data layer supports the storage of all the information; it relies on a MySQL server for storing metadata, is available to the logic layer via API calls, and is exposed to users through the graphical user interface. The logic layer contains execution modules that enable the processing of survey data (i.e. image reduction, field correction, source detection and automatic detection of asteroids). The user layer is developed using HTML5 and Bootstrap and contains the validation module that permits human validation of the detections. The NEARBY platform was implemented as an OpenStack cloud system, encapsulated in several Docker containers, and uses a Kubernetes-based microservice management system.

The last stage of the development of the platform was the testing and validation of the NEARBY functionality during two mini-surveys performed with the ING telescope, which took place between 1-6 November and 26-30 December 2018. These surveys covered more than 700 astronomical fields (about 200 sq. deg.).


The paper presents, in order:

a. the statistical results of the NEARBY tests (number of validated objects versus fake discoveries, residual analysis, with a detection performance of 100%) and the results of a comparative study with Astrometrica (a standard tool for asteroid detection, with a detection performance of 95%), carried out by a team that includes the authors of this paper, a number of amateur astronomers and a group of students in Computer Science;

b. the consequent adaptations of the NEARBY platform derived from the tests (adaptation of the field recognition and image correction to the user interface, control of the animated blinking, automatic identification of known asteroids);

Fig. 1. The field correction of the image distortion determined by the mirror curvature. The field recognition and image correction can be checked by the assistant reducer, who can easily access the distortion field maps from the field interface.

c. the first astronomical discoveries obtained using this platform (the first three Near Earth Asteroids, discovered in November, and other Mars Crosser objects).

Fig. 2. The first NEA discovered, E522022 (2018VN3), by the NEARBY platform and validated by us. The left image presents the NEARBY validation interface; the right image is its orbit representation, crossing the ecliptic plane between the orbits of Earth (in blue) and Mars (in red).

The last part of the paper presents a list of future upgrades and modifications of the platform, which are work in progress.

AUTHOR INDEX

A. Dolbilov .............................................................................................................................. 72

A. Golunov .............................................................................................................................. 72

A. Góźdź ........................................................................................................................... 46, 48

A.A. Gusev ........................................................................................................................ 46, 48

Adrian Sevcenco ..................................................................................................................... 28

Afrodita Liliana Boldea ........................................................................................................... 93

Albert Stefan .......................................................................................................................... 33

Alexander Efremov ................................................................................................................. 86

Alexandra Mirzac ................................................................................................................... 59

Alexandru Eugeniu Mizeranschi ............................................................................................. 82

Alexei Zubarev .................................................................................................................. 57, 61

Andrey Dolbilov ...................................................................................................................... 17

Aurelian Isar .......................................................................................................... 55, 56, 57, 61

Barnoviciu Eduard .................................................................................................................. 84

Beatrice Paternoster ........................................................................................................ 36, 40

Bertrand de Boisdeffre ........................................................................................................... 70

Carata Serban ......................................................................................................................... 84

Chindea Mihai ........................................................................................................................ 84

Ciprian PINZARIU .................................................................................................................... 19

Ciprian Pinzaru ....................................................................................................................... 31

Ciprian Valentin Mihali ........................................................................................................... 82

Costin Carabas ........................................................................................................................ 29

Costin Grigoras ....................................................................................................................... 24

Costin Grigoraş ....................................................................................................................... 25

Costin Radu Boldea ................................................................................................................ 93

Dajana Conte .................................................................................................................... 36, 40

Dan Florin Mihailescu ............................................................................................................. 80

Dana Petcu ............................................................................................................................. 64

Daniela Elena Ilie .................................................................................................................... 82

Daniele Cesini ......................................................................................................................... 63

Daniel-Florin Dosaru .............................................................................................................. 25

Dimitar Bakalov ...................................................................................................................... 51


Dmitry Podgainy ..................................................................................................................... 17

Dragos Ciobanu-Zabet ...................................................................................................... 21, 76

Elena Cecoi ............................................................................................................................. 56

Fakhrodin Mohamadi ............................................................................................................. 40

Farcas Felix ............................................................................................................................. 33

Felix Fărcaș ............................................................................................................................. 92

Fernando Aguilar .................................................................................................................... 66

Frédéric Derue ........................................................................................................................ 23

G. Chuluunbaatar ............................................................................................................. 46, 48

Gabriel Iuhasz ......................................................................................................................... 64

George Alexandru Nemneș .................................................................................................... 88

George Necula ........................................................................................................................ 78

Georgios Kolliopoulos............................................................................................................. 70

Gh. Adam .......................................................................................................................... 53, 68

Ghenescu Marian ................................................................................................................... 84

Ghenescu Veta ....................................................................................................................... 84

Gheorghe Adam ..................................................................................................................... 17

I. Kashunin .............................................................................................................................. 72

Ionut Vasile ................................................................................................................. 21, 76, 80

J. Buša ..................................................................................................................................... 68

Jefte Nagy ............................................................................................................................... 92

L. Gr. Ixaru .............................................................................................................................. 34

L.L. Hai .............................................................................................................................. 46, 48

Latchezar Betev ...................................................................................................................... 24

Leila Moradi ............................................................................................................................ 40

Lorant Janosi .......................................................................................................................... 78

Loredana Mocean .................................................................................................................. 74

Maria Mernea......................................................................................................................... 80

Marina Cuzminschi ........................................................................................................... 57, 61

Marnix Van Daele ............................................................................................................. 42, 50

Martin Marinov ...................................................................................................................... 86

Mihaela Bacalum .................................................................................................................... 78


Mihaescu Roxana ................................................................................................................... 84

Mihai A. Macovei ............................................................................................................. 56, 59

Mihai Carabas......................................................................................................................... 29

Mihai Carabaş......................................................................................................................... 25

Mihai Carabaș......................................................................................................................... 82

Mihai Ciubancan..................................................................................................................... 21

Mihai Ciubăncan..................................................................................................................... 26

Mihai Radu ............................................................................................................................. 78

Mihail-Radu-Cătălin Trușcă .................................................................................................... 92

Mihnea Dulea ..............................................................................................................21, 26, 76

Mikhail Kapishin ..................................................................................................................... 90

Miranda-Petronella Vlad ........................................................................................................ 74

Mohamad K. El-Daou ............................................................................................................. 44

N.S. Scott ................................................................................................................................ 15

Nagy Jefte ............................................................................................................................... 33

Nicolae Tapus ......................................................................................................................... 29

Nicolae Ţăpuş ......................................................................................................................... 25

Nikolay Kutovsky .................................................................................................................... 17

Nikolay Voytishin .............................................................................................................. 17, 90

O. Chuluunbaatar ........................................................................................................46, 48, 68

Octavian Calborean ................................................................................................................ 80

Octavian RUSU ................................................................................................................. 19, 31

Ovidiu Vaduvescu ................................................................................................................... 93

P. W. Wen............................................................................................................................... 46

P. Zrelov.................................................................................................................................. 68

P.M. Krassovitskiy ............................................................................................................ 46, 48

Paul Gasner ............................................................................................................... 19, 31

R. G. Nazmitdinov ................................................................................................................... 46

Radu Ionel Neamț .................................................................................................................. 82

Raffaele D’Ambrosio ............................................................................................................. 36

S. Adam .................................................................................................................................. 53

S.I. Vinitsky ....................................................................................................................... 46, 48

Speranta Avram ...................................................................................................................... 80

Ștefan Albert .......................................................................................................................... 92

T. Strizh ................................................................................................................................... 72

T.T. Lua ................................................................................................................................... 48

Tatiana Strizh .......................................................................................................................... 17

Toon Baeyens ................................................................................................................... 42, 50

Trusca Radu ............................................................................................................................ 33

Tudor Luca Mitran .................................................................................................................. 88

V. Korenkov ............................................................................................................................ 72

V. Mitsyn ................................................................................................................................ 72

V.L. Derbov ....................................................................................................................... 46, 48

Valeriu Vraciu ................................................................................................................. 19, 31

Valery Mitsyn ......................................................................................................................... 17

Valery Panin............................................................................................................................ 90

Vasilisa Lenivenko .................................................................................................................. 90

Viorel Ciornea ......................................................................................................................... 56

Vladimir Korenkov .................................................................................................................. 17

Vladimir Melezhik ................................................................................................................... 38

Vladimir Palichik ..................................................................................................................... 90


ISBN 978-973-0-30119-9