
Page 1: ITER Control System Technology Study

Klemen Žagar <klemen.zagar@cosylab.com>

EPICS Collaboration Meeting, Vancouver, April 2009

Page 2: Overview

- About ITER
- ITER Control and Data Acquisition System (CODAC) architecture
- Communication technologies for the Plant Operation Network
  - Use cases/requirements
  - Performance benchmark


Page 3: A Note!

The information about ITER and the CODAC architecture presented herein is a summary of ITER Organization's presentations.

Cosylab prepared the studies on communication technologies for ITER.


Page 4: About ITER (International Thermonuclear Experimental Reactor)

Page 5: About ITER

Toroidal Field Coils: Nb3Sn, 18, wedged
Central Solenoid: Nb3Sn, 6 modules
Poloidal Field Coils: Nb-Ti, 6
Vacuum Vessel: 9 sectors
Port Plugs: heating/current drive, test blankets, limiters/RH, diagnostics
Cryostat: 24 m high x 28 m dia.
Blanket: 440 modules
Torus Cryopumps: 8
Divertor: 54 cassettes

Major plasma radius: 6.2 m
Plasma volume: 840 m^3
Plasma current: 15 MA
Typical density: 10^20 m^-3
Typical temperature: 20 keV
Fusion power: 500 MW

Machine mass: 23,350 t (cryostat + VV + magnets); shielding, divertor and manifolds: 7,945 t + 1,060 t (port plugs); magnet systems: 10,150 t; cryostat: 820 t

[Cutaway view of the ITER machine, approximately 29 m high and ~28 m across]

Page 6: CODAC Architecture

Page 7: Plant Operation Network (PON)

PON use cases:

- Command invocation
- Data streaming
- Event handling
- Monitoring
- Bulk data transfer
- PON self-diagnostics: diagnosing problems in the PON; monitoring the load of the PON network
- Process control: reacting to events in the control system by issuing commands or transmitting other events (sketched below)
- Alarm handling: transmission of notifications of anomalous behavior; management of currently active alarm states
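To make the process-control use case concrete, here is a minimal sketch using the pyepics Channel Access client; the PV names and the threshold are hypothetical, and the IOC-side records are assumed to exist:

    # React to a monitored value by issuing a command (process control).
    # PV names and threshold are hypothetical.
    import epics

    TEMP_PV = "DEMO:TEMP"         # monitored process variable
    VALVE_PV = "DEMO:VALVE:CMD"   # command process variable

    def on_change(pvname=None, value=None, **kw):
        """Callback invoked by Channel Access on every monitor update."""
        if value is not None and value > 80.0:
            epics.caput(VALVE_PV, 1)   # e.g., open a cooling valve

    # Creating the PV with a callback subscribes a monitor on it.
    pv = epics.PV(TEMP_PV, callback=on_change)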


Page 8: Prototype and Benchmarking

We measured latency and throughput in a controlled test environment. This allows a side-by-side comparison, and the hands-on experience gained with each technology is also more comparable.

Latency test:
- Where a central service is involved (OmniNotify, IceStorm or EPICS/CA): send a message to the central service; upon receipt back on the sender node, measure the difference between the send and receive times.
- Without a central service (omniORB, ICE, RTI DDS): round-trip test; send a message to the receiving node, which responds; upon receipt of the response, measure the time difference.

Throughput test: send messages as fast as possible and measure the differences between receive times.

Statistical analysis to obtain average, jitter, minimum, 95th percentile, etc.
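As an illustration of the round-trip variant, the sketch below measures latency over a plain TCP connection; this is a stand-in for the middleware under test, not the study's actual harness, and the endpoint and sample count are arbitrary assumptions:

    # Round-trip latency measurement over plain TCP (illustrative only).
    import socket
    import statistics
    import time

    HOST, PORT = "127.0.0.1", 5555   # assumed test endpoint
    SAMPLES = 10000                  # assumed number of messages
    PAYLOAD = b"x" * 64              # small payload, as in the study

    def echo_server():
        """Receiver: echo each message straight back to the sender."""
        with socket.create_server((HOST, PORT)) as srv:
            conn, _ = srv.accept()
            with conn:
                while data := conn.recv(len(PAYLOAD)):
                    conn.sendall(data)

    def run_client():
        """Sender: time each send/response pair, then summarize."""
        lat_us = []
        with socket.create_connection((HOST, PORT)) as sock:
            for _ in range(SAMPLES):
                t0 = time.perf_counter()
                sock.sendall(PAYLOAD)
                sock.recv(len(PAYLOAD))   # block until the echo arrives
                lat_us.append((time.perf_counter() - t0) * 1e6)
        lat_us.sort()
        print(f"average:  {statistics.mean(lat_us):8.1f} us")
        print(f"minimum:  {lat_us[0]:8.1f} us")
        print(f"95th pct: {lat_us[int(0.95 * len(lat_us))]:8.1f} us")
        print(f"jitter:   {statistics.stdev(lat_us):8.1f} us")

A one-way estimate is roughly half the round-trip figure; in the central-service variant the sender instead subscribes to its own message and timestamps it on receipt.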


Page 9: Applicability to Use Cases

Use case           | Channel Access | omniORB CORBA | RTI DDS | ZeroC ICE
-------------------+----------------+---------------+---------+----------
Command invocation | 4/2            | 5/5           | 4/3     | 5/5
Event handling     | 4/3            | 4/4           | 5/4     | 4/5
Monitoring         | 5/5 (EPICS)    | 5/5 (TANGO)   | 5/3     | 5/3
Bulk data transfer | 5/3            | 4/4           | 5/4     | 4/4
Diagnostics        | 5              | 4             | 5       | 3
Process control    | 5 (EPICS)      | 5 (TANGO)     | 4       | 3
Alarm handling     | 5 (EPICS)      | 5 (TANGO)     | 3       | 3

First number: performance; second number: functional applicability to the use case.

Rating scale:
1. not applicable at all
2. applicable, but at a significant performance/quality cost compared to the optimal solution; custom design required
3. applicable, but at some performance/quality cost compared to the optimal solution; custom design required
4. applicable, but at some performance/quality cost compared to the optimal solution; foreseen in existing design
5. applicable, and close to the optimal solution; use case foreseen in design


Page 11: PON Latency (small payloads)

[Chart: PON latency vs. payload size (0 to 2500 bytes); latency [ms] from 0 to 700; series: ICE, omniORB, RTI DDS, Commercial DDS II]

Page 12: PON Latency (small payloads)

[Chart: PON latency vs. payload size (0 to 2500 bytes); latency [ms] from 100 to 500; series: EPICS (sync), OmniNotify, IceStorm]

Ranking:
1. omniORB (one-way invocations)
2. ICE (one-way invocations)
3. RTI DDS (not tuned for latency)
4. EPICS
5. OmniNotify
6. IceStorm

Page 13: PON Throughput

[Chart: 1 Gbps link utilization vs. payload size (10 bytes to 1,000,000 bytes, log scale); utilization 0% to 100%; series: omniORB (oneway), RTI DDS (unreliable), ICE (oneway), Commercial DDS II (unreliable)]

Page 14: PON Throughput

[Chart: 1 Gbps link utilization vs. payload size (10 bytes to 1,000,000 bytes, log scale); utilization 0% to 80%; series: EPICS (async), EPICS (sync), IceStorm, OmniNotify]

Ranking:
1. RTI DDS
2. omniORB (one-way invocations)
3. ICE (one-way invocations)
4. EPICS
5. IceStorm
6. OmniNotify

Page 15: PON Scalability

RTI DDS efficiently leverages IP multicasting. (source: RTI)

With technologies that do not use IP multicasting/broadcasting, per-subscriber throughput is inversely proportional to the number of subscribers! (source: RTI)
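The arithmetic behind this: with unicast, a publisher on a 1 Gbps link must transmit each update once per subscriber, so each of N subscribers receives at most roughly 1/N of the link bandwidth; with IP multicast, one transmission reaches all of them. A minimal sketch with standard UDP sockets (the group address and port are arbitrary assumptions):

    # One multicast send reaches every joined subscriber.
    import socket
    import struct

    GROUP, PORT = "239.1.2.3", 5007   # assumed multicast group/port

    # Subscriber: join the multicast group, then receive as usual.
    sub = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sub.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sub.bind(("", PORT))
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                       socket.inet_aton("0.0.0.0"))
    sub.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    # Publisher: the cost per update is ONE datagram, independent of the
    # number of subscribers; with unicast it is one send per subscriber.
    pub = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    pub.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    pub.sendto(b"monitor-update", (GROUP, PORT))

    print(sub.recvfrom(1500)[0])   # b'monitor-update'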

Page 16: EPICS

Ultimately, the ITER Organization has chosen EPICS:

- Very good performance.
- Easiest to work with.
- Very robust.
- A full-blown control system infrastructure (not just middleware).
- Likely to be around for a while (widely used by many labs).

Where could EPICS improve?

- Use IP multicasting for monitors.
- Provide a remote procedure call layer (e.g., "abuse" waveforms to transmit data serialized with Google Protocol Buffers, as sketched below, or use PVData in EPICS v4).
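A minimal client-side sketch of the "waveform abuse" idea: the request is serialized to bytes and written into a char waveform record. The PV name is hypothetical, and struct.pack stands in for a real Protocol Buffers message; pyepics is used as the Channel Access client:

    # Tunnel a serialized request through an EPICS char waveform record.
    import struct
    import epics

    REQUEST_PV = "DEMO:RPC:REQ"   # hypothetical waveform record (FTVL=CHAR)

    def serialize_request(cmd_id, arg):
        """Stand-in for a Protocol Buffers message: command id + argument."""
        return struct.pack("<If", cmd_id, arg)

    payload = serialize_request(42, 3.14)
    # Write the bytes as an array of chars; IOC-side logic (not shown)
    # would deserialize the buffer and dispatch the call.
    epics.caput(REQUEST_PV, list(payload) + [0], wait=True)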


Page 17: Thank You for Your Attention