Scalable TriDAS for the NEMO Project
Tommaso Chiarusi, INFN Sez. Bologna & Phys. Dep. Univ. Bologna
for the NEMO Collaboration
Monte Porzio Catone, Villa Mondragone 14/05/2009
Talk overview
- Neutrino Telescope: just add water...
- All data to shore: challenging throughput
- TriDAS base concept
- From NEMO Ph.1 to the scalable km3 TriDAS architecture
[Figure: detection principle; an upgoing ν produces an upgoing µ whose Cherenkov light is detected at the seabed (depth > 3000 m), with atmospheric downgoing µ as background]
- Less atm. bkg. when deeper (below 3000 m u.s.l.)
- Expected signal > atm. bkg. when E_ν > 10 TeV → telescope volume > 1 km3 for some evts/year → many PMTs (> 5000) → complex experimental setup
- NEMO location: CAPO PASSERO @ 3400 m u.s.l.
See Paolo Piattelli's talk in the plenary session today.
[Map: the NEMO Capo Passero site; 100 km scale]
Some possible km3 detector layouts:
[Diagram: Detection Unit, cable PJB-SJB, cable SJB-DU]
- Square Grid: 81 Detection Units, 80 PMTs / D.U., 6480 PMTs total
- Hexagonal Grid: 90 Detection Units, 80 PMTs / D.U., 7200 PMTs total
[Figure: one muon event, only muon hits shown, no background]
The Telescope is made of DETECTION UNITS: 20 floors, with 4 10" PMTs each.
Expected optical data sources:
- bioluminescence (negligible at depth ≥ 2500 m)
- upgoing neutrinos (atm. + signal): ~50 / day / km3
- atmospheric muons: 10÷100 Hz / km3
- 40K decays: 30÷40 kHz / 10" PMT @ 0.5 p.e. threshold
[Plot: hit samples vs. pulse time on a 10" PMT with 0.5 p.e. threshold (NEMO Ph.1 data)]
S.P.E. waveform: ~16 samples = 16 Bytes
S.P.E. HIT SIZE: Hit PMT Info + Hit Time + Hit Charge + Hit Waveform (samples) = 28 Bytes
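As an illustration, a minimal Python sketch of such a hit record; only the totals (16 B of waveform samples, 28 B overall) come from the slide, while the exact split of the remaining 12 B among PMT info, time and charge is a hypothetical choice:

```python
import struct

# A 28-byte S.P.E. hit record. Only the totals are fixed by the slide
# (16 B of waveform samples, 28 B overall); the split of the remaining
# 12 B into PMT info / time / charge fields below is hypothetical.
HIT_FORMAT = "<HQH16s"   # 2 B PMT info + 8 B hit time + 2 B hit charge + 16 B samples
assert struct.calcsize(HIT_FORMAT) == 28

def pack_hit(pmt_id: int, hit_time: int, hit_charge: int, samples: bytes) -> bytes:
    """Serialize one single-photo-electron hit (samples: 16 raw waveform bytes)."""
    return struct.pack(HIT_FORMAT, pmt_id, hit_time, hit_charge, samples)

# At a 40 kHz single rate this gives 40e3 * 28 B * 8 = 8.96 Mbps per PMT,
# consistent with the ~8.8 Mbps row of the rate table below.
```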
Expected Data Rates

10" PMT single rate (kHz)   Data rate per PMT (Mbps)   per Floor* (Mbps)   per D.U.** (Gbps)   per km3*** (Gbps)
 40 (bare 40K)                      8.8                     35.2                0.7                  70
 80 (NEMO Ph.1)                    16.8                     67.2                1.3                 130
150 (present DAQ)                  32                      128                  2.5                 250
300 (expanded DAQ)                 64                      256                  5.0                 500

* 4 PMT / Floor    ** 20 Floor / D.U.    *** 100 D.U.
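The table follows from the 28 B hit size and the detector multiplicities; a short sketch that reproduces it (within the table's rounding):

```python
# A sketch that reproduces the rate table from the 28 B hit size and the
# detector multiplicities (4 PMT/Floor, 20 Floor/D.U., 100 D.U./km3);
# results match the quoted numbers within rounding.
HIT_BITS = 28 * 8                       # 224 bits per hit
PMT_PER_FLOOR, FLOOR_PER_DU, DU_PER_KM3 = 4, 20, 100

for rate_khz in (40, 80, 150, 300):
    per_pmt   = rate_khz * 1e3 * HIT_BITS        # bps
    per_floor = per_pmt * PMT_PER_FLOOR
    per_du    = per_floor * FLOOR_PER_DU
    per_km3   = per_du * DU_PER_KM3
    print(f"{rate_khz:4d} kHz | {per_pmt/1e6:5.1f} Mbps/PMT | "
          f"{per_floor/1e6:5.1f} Mbps/Floor | {per_du/1e9:4.2f} Gbps/D.U. | "
          f"{per_km3/1e9:5.1f} Gbps/km3")
```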
The TriDAS reduces the data rate by filtering the data stream, which is bunched into Time Slices (TS).
A TS contains all data from all of (or part of) the detector occurring in a given time interval (~100÷200 ms).
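A minimal sketch of this bunching, assuming hits arrive as (timestamp, payload) pairs; the 200 ms slice length is just the upper end of the quoted range:

```python
from collections import defaultdict

# A minimal sketch of the TS bunching, assuming hits arrive as
# (timestamp_ns, payload) pairs; the 200 ms slice length is the upper
# end of the quoted ~100-200 ms range.
TS_LENGTH_NS = 200_000_000

def bunch_into_time_slices(hits):
    """Group hits into consecutive Time Slices keyed by TS index."""
    slices = defaultdict(list)
    for t_ns, payload in hits:
        slices[t_ns // TS_LENGTH_NS].append((t_ns, payload))
    return dict(slices)
```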
WHAT IS SCALABLE?
The TriDAS principal elements:
- Trigger System Controller (TSC): monitors and serves the TriDAS (1 GbE I/O)
- Hit Managers (HM): receive optical data; prepare and distribute the TS to the TCPUs
- TriggerCPUs (TCPU): apply the trigger logics to the assigned TS and transfer the selected data to the EM
- Event Manager (EM): receives the triggered data from the TCPUs and builds the Post-Trigger event data files
Atmospheric Muon Signal (@ 3000 m depth), assuming:
- Rate µ atm: 100 Hz
- Rate 40K: 300 kHz
- N. PMTs: 8000
- Rec. time window: 6 µs
Post-Trigger data rate: ~40 MBps; stored data / day ~1 TB!

Upgoing Neutrino Signal, assuming:
- Rate ν: < 4×10⁻³ Hz
- Rate 40K: 300 kHz
- N. PMTs: 8000
- Rec. time window: 6 µs
Post-Trigger data rate: ≲ 2 kBps; stored data / day ≲ 150 MB
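These figures follow from the assumptions above if each trigger stores a 6 µs snapshot from all PMTs (my reading of the recording window); a worked check:

```python
# A worked estimate of the post-trigger rates under the slide's assumptions,
# assuming (my reading) that each trigger stores a 6 us snapshot from all PMTs.
HIT_BYTES = 28
N_PMT     = 8000
RATE_K40  = 300e3     # Hz per PMT
WINDOW    = 6e-6      # s

event_bytes = RATE_K40 * WINDOW * N_PMT * HIT_BYTES   # ~403 kB per snapshot

mu_rate = 100 * event_bytes        # 100 Hz of atmospheric muons
nu_rate = 4e-3 * event_bytes       # < 4e-3 Hz of upgoing neutrinos
print(f"event size ~ {event_bytes/1e3:.0f} kB")
print(f"atm. mu   : ~{mu_rate/1e6:.0f} MBps")    # ~40 MBps, as quoted
print(f"upgoing nu: ~{nu_rate/1e3:.1f} kBps")    # ~1.6 kBps, i.e. < 2 kBps
```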
Time Slice and Barrel Shift Paradigm
[Diagram: each HM (HM 1, HM 2, HM 3) sends its fragment of Time Slice i (DU-TS i,1 / DU-TS i,2 / DU-TS i,3) to the same TCPU, which assembles the full TS i; successive Time Slices are shifted across TCPU 1, TCPU 2, TCPU 3 in turn]
If a TCPU needs more time, just add one more!
[refer to: M.F. Letheren, 1995 CERN School of Computing]
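A minimal sketch of the barrel-shift assignment; the round-robin mapping below is the whole idea, and scaling means only raising n_tcpu:

```python
from itertools import cycle

# A minimal sketch of the barrel-shift dispatch: consecutive Time Slice ids
# are assigned round-robin across the TCPUs, so every HM sends its
# DU-TS(i, du) fragments of TS i to the same TCPU.
def barrel_shift(ts_ids, n_tcpu):
    """Map each Time Slice id to the TCPU that assembles and triggers it."""
    tcpus = cycle(range(n_tcpu))
    return {ts_id: next(tcpus) for ts_id in ts_ids}

# TS 0..5 with 3 TCPUs -> {0: 0, 1: 1, 2: 2, 3: 0, 4: 1, 5: 2};
# a slow trigger is absorbed by raising n_tcpu, not by changing the HMs.
print(barrel_shift(range(6), 3))
```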
TriDAS for NEMO Ph.1
[Diagram: the four floors reach shore via the CABLE; on shore, PC Floor 1, PC Floor 2, PC Floor 3 and PC Floor 4 feed a Gbit switch; the Master CPU (< 70 MB/s) writes to hard disks; a PC Monitor (Windows/Linux) runs the DM - RC]
The maximum throughput of the MiniTower was ≈ 512 Mbps.
One machine, the Master CPU, played all the roles: Hit Manager, Trigger, Event Manager.
TriDAS for a first prototype Detection Unit with 16 Floors; expected throughput ≈ 2 Gbps over standard 1 GbEthernet networking (cf. the 128 Mbps per floor at 150 kHz in the rate table: 16 × 128 Mbps ≈ 2 Gbps).
[Diagram: Floor 1 through Floor 16 connect to the Shore station and its interfaces]
Possible Network infrastructure for km3 TriDAS (90 D.U.)
[Plot: N. of TCPUs (0 to 60) vs. Single Rate on a PMT (0 to 350 kHz)]
Estimated number of necessary TCPUs:

$$\frac{\nu_K \, N_C \, N_{PMT}}{N_{TCPU}} = R_{CPU} \quad\Rightarrow\quad N_{TCPU} = \frac{\nu_K \, N_C \, N_{PMT}}{R_{CPU}}$$

Number of available clock cycles per CPU per hit:

$$\frac{R_{CPU} \, S_{Hit}}{N_{ACC}} = D_{rate}^{\,HM \to TCPU} \quad\Rightarrow\quad N_{ACC} = \frac{R_{CPU} \, S_{Hit}}{D_{rate}^{\,HM \to TCPU}}$$

with $D_{rate}^{\,HM \to TCPU} = 10$ Gbps, $S_{Hit} = 224$ b, $R_{CPU} = 3$ GHz, $N_{PMT} = 8000$, $N_C = N_{ACC} = 70$.
If the required N_C > N_ACC (i.e. the trigger algorithm is slow): ADD TCPUs!
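A short sketch of this estimate, reproducing the plot's linear growth:

```python
# A sketch of the TCPU-count estimate above: with N_C = N_ACC = 70 cycles
# per hit, N_PMT = 8000 and R_CPU = 3 GHz, the required number of TCPUs
# grows linearly with the PMT single rate, as in the plot.
R_CPU  = 3e9      # Hz
S_HIT  = 224      # bits per hit
D_RATE = 10e9     # bps, HM -> TCPU link
N_PMT  = 8000
N_C    = 70       # cycles the trigger spends per hit

n_acc = R_CPU * S_HIT / D_RATE            # ~67 cycles available per hit (~70)

for nu_k in (40e3, 80e3, 150e3, 300e3):   # single rate per PMT, Hz
    n_tcpu = nu_k * N_C * N_PMT / R_CPU
    print(f"{nu_k/1e3:5.0f} kHz -> {n_tcpu:5.1f} TCPUs")
# 300 kHz -> 56.0 TCPUs; a slower trigger (N_C > N_ACC) simply needs more.
```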
Monitoring
Two dispatchers:
- Dispatcher for TriDAS status
- Dispatcher for Physics Data
Monitored stages: Time Slices transferred to the TCPUs; Trigger Time Windows transferred to the EM; events written and stored.
Via ControlHost: tag-controlled data dispatching [V. Maslenikov et al., CASPUR, http://afs.caspur.it/temp/ControlHost.pdf]
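An illustrative sketch of tag-controlled dispatching in the ControlHost spirit (not the real ControlHost API; the tag names are hypothetical):

```python
from collections import defaultdict

# An illustrative sketch of tag-controlled dispatching in the ControlHost
# spirit; this is NOT the real ControlHost API. Tags (the "MONI"/"PHYS"
# names here are hypothetical) route each message to its subscribers.
class Dispatcher:
    def __init__(self):
        self.subscribers = defaultdict(list)   # tag -> callbacks

    def subscribe(self, tag, callback):
        self.subscribers[tag].append(callback)

    def publish(self, tag, data):
        for callback in self.subscribers[tag]:
            callback(data)

# e.g. a monitoring client sees TriDAS status, an analysis client physics data:
hub = Dispatcher()
hub.subscribe("MONI", lambda d: print("TriDAS status:", d))
hub.subscribe("PHYS", lambda d: print("physics data:", d))
hub.publish("MONI", "HM backlog: 2 TS")
```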
Multi-purpose on-line Visualizer (with D. Bonfigli - UniBo)
On(Off)-line Event Display ( with A. Riccardo - UniBo)
Conclusions
- "All data to shore" is a challenging BUT feasible strategy for a km3 underwater ν-telescope;
- a scalable TriDAS architecture supplying a high data stream (up to 500 Gbps) is possible and affordable with present technology;
- the NEMO Collaboration is completing a scaled TriDAS for the prototype Detection Unit.
G. Priede et al., Deep-Sea Research I 55 (2008) 1474–1483