
Page 1: Control Status Readiness Report

Control Status Readiness Report

12 June 2008 LHC – MAC

Eugenia Hatziangeli on behalf of

the CERN Accelerator and Beams Controls Group

Page 2: Control Status Readiness Report

Outline

• LHC controls infrastructure – overview
• Status Report on Core Controls
– Front ends Hardware and Software
– Databases
– Industrial Controls
– Machine Interlocks
– The LHC timing system
• Core Services and Applications
– Post mortem
– Sequencer for HWC (see next talk for the Beam Sequencer)
– Logging
– Software Interlock System
– LSA (see next talk)
• Controls Security
– Role Based Access
– Management of Critical Settings
– LHC Controls Security Panel
• Controls Infrastructure Tests
– Deployment on LEIR, SPS, LHC TL
– Dry runs – Commissioning
– Scalability Tests
– LHC Timing Crash Tests
– RBAC tests
• Monitoring and Diagnostics
– LASER
– DIAMON
• Injector Renovation
• Summary

12 June 2008 – E. Hatziangeli AB/CO


Page 4: Control Status Readiness Report

LHC controls infrastructure – Overview

• The 3-tier architecture (hardware infrastructure and software layers)
– Resource Tier: VME crates, PC gateways & PLCs dealing with high-performance acquisition and real-time processing, plus the database where the settings and configuration of all LHC devices reside
– Server Tier: application servers, data servers, file servers, central timing
– Client Tier: interactive consoles, fixed displays, GUI applications
• Communication to the equipment goes through the Controls MiddleWare (CMW)

[Diagram: client tier (applications layer) above server tier (business layer) above resource tier (hardware), with CMW linking the tiers to the controls database]
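As a rough illustration of the 3-tier call path, the sketch below shows a client-tier read going through a server-tier directory lookup down to a resource-tier device. All class, device and property names here are invented for the example; this is not the real CMW API.

```python
# Illustrative 3-tier read path (invented names, not the real CMW API):
# client tier -> application server -> middleware-style call -> front end.

class ResourceTier:
    """Stand-in for a front-end device publishing acquisition data."""
    def __init__(self, values):
        self._values = values

    def get(self, prop):
        return self._values[prop]

class ServerTier:
    """Stand-in for an application server using a device directory
    (the role played by the configuration database)."""
    def __init__(self, directory):
        self._directory = directory  # device name -> front end

    def read(self, device, prop):
        front_end = self._directory[device]  # configuration lookup
        return front_end.get(prop)           # middleware-style call

# Client tier: a console or GUI application would issue calls like this.
fe = ResourceTier({"current": 123.0})
server = ServerTier({"DIPOLE.S78": fe})
print(server.read("DIPOLE.S78", "current"))  # 123.0
```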


Page 6: Control Status Readiness Report

Front-Ends Hardware and Software

• Hardware Installations
– All front-end controls equipment in place (> 200 VMEbus systems and > 250 industrial PCs)
– WorldFIP infrastructure operational for PO, QPS, Cryogenics and BI (400 km network, 20,000 nodes)
– General Machine Timing (GMT) network operational, including transmissions for the LHC Collimators and for the LHC Experiments

• Front End Software Architecture (FESA)
– FESA v2.10 framework operational, including support for machine-critical settings, transactional commands and in-depth diagnostics of RT behaviour (via the alarms system LASER)
– Front-end FESA classes developed by the AB equipment groups (> 250 classes deployed on > 400 front-ends)
– Deployment process => AB-CO supports the 3 last FESA releases
– All industrial PCs now running the Linux OS

• Ongoing Actions
– Two major tendering exercises for the procurement of AB front-end hardware (adjudication during the CERN FC in September 2008)

Page 7: Control Status Readiness Report

Accelerator Databases Readiness

Off-line Databases – Layout
• Racks & electronics incorporated up to a high level of detail
• Layout data is now used as the foundation for the controls system
Still to do:
• More data is being captured relating layout and assets information
• Tools for data maintenance still to be put in place

Online Databases – Operational Settings
• Data model enhanced to cover functional extensions for Role Based Access (RBAC), XPOC, Sequencer
• PS Controls Renovation requirements
Logging Service
• New database hosting since March 2008
• Common logging infrastructure for the complete accelerator chain
• Sustained increasing logging requirements for HWC & beam data
• Improved data retrieval tool

Page 8: Control Status Readiness Report

Service Availability and Data Security

[Diagram: database services (Controls Configuration, LSA Settings, E-Logbook, CESAR, HWC Measurements, Measurements, Logging) hosted on clustered servers with 2 x quad-core 2.8 GHz CPUs and 8 GB RAM, clustered NAS shelves (14 x 300 GB SATA disks, 11.4 TB usable; 14 x 146 GB FC disks) and an additional server for testing: a standby database for LSA]

• Service Availability
– New infrastructure has high redundancy for high availability
– Each service is deployed on a dedicated Oracle Real Application Cluster
– The use of a standby database will be investigated (objective of reaching 100% uptime for LSA)
• Data Security
– Secure database accounts granting specific privileges to dedicated DB accounts
– DIAMON agents on the Oracle Application Servers
• For all CO databases: CO provides the user requirements and pays for the hardware; IT chooses the hardware, hosts it, supports and maintains it

Page 9: Control Status Readiness Report

Industrial Controls

• Industrial Controls for LHC have reached a high level of maturity
• All systems, fully deployed for HWC in 2007, are presently in their operational version
– Machine protection (PIC, WIC, QPS, Circuits)
– Collimator Environment Monitoring Package (temperature, water cooling)
– Survey
– Cryogenic controls
– Cryogenics Instrumentation Expert Tools (CIET)
• Most of the SCADA applications have been ported to Linux
• The front-end FESA software has been ported to Linux
• Migration to the latest version of FESA (v2.10) to be done for the next shutdown
• The interface towards the logging database has been consolidated
• DIAMON is used for diagnostics
– PLC agents are available, tested and ready to be deployed
– PVSS diagnostics will soon be available

Page 10: Control Status Readiness Report

Cryogenics Control System

[Diagram: cryogenics control system for Sector 78 and Sector 81 (3.3 km each), spanning surface, shaft, cavern and tunnel levels – 4.5 K and 1.8 K cold boxes and compressors, main dryers, LN2 buffer, QUI/QUR/QSC/QSR/QSA/QSK equipment and the S78 & S81 return modules (RM78, RM81) – connected via Profibus DP/PA and WorldFIP to the SCADA data servers and the local & central control rooms]

Page 11: Control Status Readiness Report

Cryogenic Controls Reliability

• The operation of the cryogenics sectors has revealed a high-risk dependency of the cryo control system on the reliability of the Technical Network
• Steps taken to reduce the dependencies
– PLC architecture was rationalized => no dependency of the cryogenics control loops on Ethernet for production equipment
– Architecture of the network components was optimized => minimum dependency on communications equipment (switches)
– Powering of network components was checked => homogenized where possible with the cryo powering
• Work ongoing
– Identify the weak network components and improve them (fiber–copper)
– Consolidate the restart of communication after a network failure
– Ensure interventions on the Technical Network (hardware & software) are carefully planned and agreed with Operation

Page 12: Control Status Readiness Report

Machine Interlocks

• Powering Interlock System (PLC based) – protecting superconducting magnets
• Warm Magnet Interlock System (PLC based) – protecting normal-conducting magnets
• Beam Interlock System (VME based) – protecting the equipment during beam operation
• + Safe Machine Parameters system (VME based)
• + Fast Magnet current Change Monitors (FMCM)

Page 13: Control Status Readiness Report

Powering & Warm Magnets Interlocks

Powering Interlock Controllers
• 36 units of a PLC-based system protecting ~800 LHC electrical circuits
• Monitored via PVSS supervision
• Operational and in daily use during HWC

Warm Magnet Interlock Controllers
• 8 units of a PLC-based system protecting ~150 LHC normal-conducting magnets
• Monitored via PVSS supervision
• Operational and in daily use during HWC

Page 14: Control Status Readiness Report

Beam Interlock System

• BIS will be ready for the machine checkout
• Individual system tests successfully performed
• Beam Interlock Controllers: 19 VME systems and ~200 connections with most of the LHC systems
• Ongoing BIS commissioning (involving all user systems), done in parallel with HWC
– 3/8 points already performed
• Monitored by an operational application

Page 15: Control Status Readiness Report

(TT40 incident in 2004)

[Photos: vacuum chamber cut of ~110 cm (outside and inside views), ejected material opposite the cut (inside view), no marks or damage on the magnet flanges; beam direction indicated]

Page 16: Control Status Readiness Report

Fast Magnet current Change Monitors – FMCM

• Successful collaboration with DESY – DESY development + CERN adaptation
• First units successfully used during the SPS extraction tests and CNGS runs in 2007; currently being re-commissioned for the 2008 runs
• Installation and commissioning in progress
– 12 monitors deployed in the LHC (+ 14 in the transfer lines), including ALL septa families
– LHC installations to be completed next month (12 devices)
• 1st version of the FESA class and Java supervision available since June 2007
– Minor consolidation work in progress

[Plots: power-converter current I (A) versus time (ms) on 500 ms and 10 ms scales, showing the FMCM triggering on a 0.1% current drop. Photo: view of the FMCM board]
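The triggering principle behind the plots (react to a small, fast relative change of the magnet current) can be sketched in a few lines. The threshold and samples below are illustrative only, not the real FMCM firmware:

```python
# Illustrative FMCM-style trigger: flag the first sample whose relative
# change from the previous sample exceeds a threshold (0.1% here, per
# the slide; values and sampling are invented for the example).

def fmcm_trigger(samples, threshold=0.001):
    """Return the index of the first sample whose relative change from
    the previous one exceeds `threshold`, or None if none does."""
    for i in range(1, len(samples)):
        ref = samples[i - 1]
        if ref != 0 and abs(samples[i] - ref) / abs(ref) > threshold:
            return i
    return None

# A flat current trace with a small, fast step down near the end (0.2%):
trace = [5000.0] * 8 + [4990.0, 4990.0]
print(fmcm_trigger(trace))  # 8
```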

Page 17: Control Status Readiness Report

Timing System major components

• The LHC central timing
– Master, slave and gateway using reflective memory, and a hot-standby switch
• The LHC injector chain timing (CBCM)
– Master, slave and gateway using reflective memory, and a hot-standby switch
• Timing is distributed over a dedicated network to timing receivers (CTRx) in the front ends
• LHC and SPS safe machine parameter distribution

Page 18: Control Status Readiness Report

Safe Machine Parameters

• The SPS and LHC safe beam flags and beam energy are distributed on the LHC timing network
– Work needs to be done for the final system to be ready
• The CTR timing receiver modules are able to distribute the beam energy and safe beam flags without any software, which ensures higher reliability
• All timing receivers are monitored by DIAMON
– Powerful diagnostics for 1000+ receivers


Page 20: Control Status Readiness Report

Post Mortem - Towards Beam Commissioning

• Upon beam dump / self-triggering, systems start pushing data to the PM system, Logging, Alarms, etc.
• Data completeness and consistency checks at system and global level (minimum data, configurable)
• Individual system analysis & checks: I/XPOC, IPOC-BIS, event sequence, circuit events, BLM/BPM > threshold
• Global PM analysis: global event sequence, summaries, advised actions, event DB, ...
• Participating systems: BLM, BPM, FGC, QPS, PIC/WIC, BIS, XPOC, FMCM
• Output: global event sequence, advised actions, "Machine Protection OK"

• Validation of machine protection features
• Pre-analysis of PM buffers into result files
• Flagging of interesting systems / data reduction
• Database catalogue
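The data completeness check amounts to comparing the set of systems that pushed post-mortem data against the configured minimum set. A minimal sketch, with the system list taken from the slide and the function name invented:

```python
# Post-mortem completeness check sketch: after a beam dump, verify that
# every expected system delivered data before global analysis starts.
# The expected set follows the slide; the API shape is invented.

EXPECTED = {"BLM", "BPM", "FGC", "QPS", "PIC", "BIS"}

def completeness_check(received):
    """Return (ok, missing_systems) for one post-mortem event."""
    missing = EXPECTED - set(received)
    return (not missing, sorted(missing))

ok, missing = completeness_check(["BLM", "BPM", "FGC", "QPS", "PIC", "BIS"])
print(ok)                                      # True: all systems reported
print(completeness_check(["BLM", "FGC"])[1])   # ['BIS', 'BPM', 'PIC', 'QPS']
```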

Page 21: Control Status Readiness Report

Post Mortem Readiness

• Many tools are ready for HWC and tested on sectors 5-6 and 7-8
• Implemented since December 2007
– Automatic test analysis in three applications (expert override possible)
– Calculated result parameters sent to the Sequencer for MTF upload
– Well-defined GUI for each test step
– Test results electronically signed by role using RBAC
– Event recognition with the Event Builder
– Redundant services for PM collection and data
– Scalability tests with first beam clients started (BLM, BPM)
• To do for 2008
– Parallel sector commissioning still to be tested
– For the 600 A circuits many steps are still to be automated
– Further validation tests with beam clients to be done (BLM, BPM, RF, etc.)
– Extend the framework from HWC to beam operation
– Implement a higher level of automated test analysis (data completeness checks & individual system tests)

Page 22: Control Status Readiness Report

Readiness of HWC sequencer

• First version of the sequencer deployed in early 2007
– Many new versions with improvements deployed since
• HWC: Sector 7-8 May–Jul 2007, Sector 4-5 winter 2008, Sector 5-6 spring 2008
– ~35 sequences written and maintained by 3 HWC experts
– Sector 4-5: over 1700 sequences executed; in 5-6 over 600
– Essential tool for HWC
• Overall it works well and satisfies the requirements
– The Sequencer (the tool) is complete; no important new features needed
– The sequences (the tests) are maintained by HWC experts
• Ready for multi-sector / multi-front HWC
– Used in multi-front operations for over a year
– Recent experience in multi-sector operations: "normal" HWC in sector 78 while training quenches are done in sector 56
– No scalability issues are anticipated

Page 23: Control Status Readiness Report

Logging Service Readiness

• Logging for operation
– Data logged from the PS complex, SPS, CNGS, LHC HWC, LHC, any type of equipment
– Processes run continuously on dedicated machines
– Monitored through the alarms system LASER, DIAMON & a diagnostic application
• Logging for equipment commissioning
– Dedicated service, running in the environment of the specialist
– Aim: validation of the equipment behaviour before operational deployment
– No interference with operational logging
• Requirement for a watchdog system (coming weeks)
– For critical data (INB, CNGS neutrino events, ...): continuous monitoring of the data logged in the DB, generation of a specific alarm
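A minimal sketch of such a watchdog: for each critical variable, raise an alarm when no new record has been logged within its deadline. Timestamps are plain seconds and the variable names are invented for the example:

```python
# Watchdog sketch for critical logged data: compare the time of the
# latest logged record per variable against a per-variable deadline.
# Variable names and deadlines are illustrative, not the real config.

def watchdog(last_seen, now, deadlines):
    """Return the sorted list of variables whose data is overdue."""
    alarms = []
    for var, deadline in deadlines.items():
        latest = last_seen.get(var)
        if latest is None or now - latest > deadline:
            alarms.append(var)   # would raise a specific LASER-style alarm
    return sorted(alarms)

last_seen = {"CNGS.EVENTS": 90.0, "INB.DOSE": 40.0}
print(watchdog(last_seen, now=100.0,
               deadlines={"CNGS.EVENTS": 60.0, "INB.DOSE": 30.0}))
# ['INB.DOSE']
```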

Page 24: Control Status Readiness Report

Software Interlock System - SIS Overview

• Very useful system to anticipate failures and give early alarms
• Accommodates complex interlock logic
• Complements the (hardware) BIS as a protection system
• Proved to be a reliable tool for operations
• Excellent experience in the SPS (900 parameters monitored)

Page 25: Control Status Readiness Report

SIS for LHC

• Gives 2 permits for the injection BICs (Beam 1 & Beam 2)
– All PCs that are not HW-interlocked (~800: orbit correctors, warm magnets)
– Current of the separation dipoles and MCBX orbit correctors
– Ring & injection screens (only IN when in inject-dump mode)
– Extraction screens
– Circulating beam intensity limit
• Gives a permit for the LHC ring (dumps the beam; initially alarms only)
– Integrated field of the orbit correctors (beam dump energy tracking)
– Extraction screens combined with intensity + energy
– Orbit at the TCDQ
• Future work
– RBAC integration
– Critical settings monitoring (MCS) from LSA
– Refinement of the configuration as we progress with the LHC
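A software interlock permit of this kind is essentially a conjunction of named condition checks over monitored parameters. A minimal sketch, with condition names and limits loosely inspired by the list above but entirely invented:

```python
# SIS-style permit sketch: the injection permit is the AND of named
# condition checks; failed conditions are reported for diagnostics.
# Condition names, parameters and limits are illustrative only.

def injection_permit(params, intensity_limit=1e10):
    checks = {
        "correctors_within_field_limit": params["corrector_field"] < 1.0,
        "screens_out_or_inject_dump":
            params["screens_out"] or params["mode"] == "inject-dump",
        "intensity_below_limit": params["intensity"] < intensity_limit,
    }
    failed = sorted(name for name, ok in checks.items() if not ok)
    return (not failed, failed)

ok, failed = injection_permit({"corrector_field": 0.2, "screens_out": True,
                               "mode": "physics", "intensity": 5e9})
print(ok)  # True: permit given
```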


Page 27: Control Status Readiness Report

Role Based Access (RBAC) Overview

Need to prevent:
– a well-meaning person from doing the wrong thing at the wrong moment
– an ignorant person from doing anything at any moment

The RBAC token contains:
• Application name
• User name
• IP address/location
• Time of authentication
• Time of expiry
• Roles[ ]
• Digital signature (RBA private key)

Authentication:
– The user requests to be authenticated
– RBAC authenticates the user via the NICE user name and password
– RBA returns the token to the application

Authorization:
– The application sends the token to the application server (3-tier environment)
– The CMW client sends the token to the CMW server
– The CMW server (on the front end) verifies the token
– The CMW server checks the access map for role, location, application, mode

[Diagram: application with RBAC token → application server → CMW client → CMW server with access map → FESA, configured from the configuration DB]
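The final authorization step, checking the token's roles against the access map, can be sketched as follows. The device, property and role names are invented for the example and do not reflect CERN's real rules:

```python
# Access-map check sketch: the CMW server looks up the (device, property)
# entry and grants access if the token carries at least one allowed role.
# All map entries and role names here are invented.

ACCESS_MAP = {
    ("PC.SECTOR78", "currentSetting"): {"LHC-Operator", "PC-Expert"},
    ("PC.SECTOR78", "currentAcq"): None,   # None = unrestricted read
}

def authorize(token_roles, device, prop):
    allowed = ACCESS_MAP.get((device, prop), set())  # unknown => deny
    if allowed is None:
        return True
    return bool(set(token_roles) & allowed)

print(authorize(["LHC-Operator"], "PC.SECTOR78", "currentSetting"))  # True
print(authorize(["Visitor"], "PC.SECTOR78", "currentSetting"))       # False
```

Defaulting to "deny" for unlisted properties mirrors the dry-run policy described later ("a property that is not protected is not authorized").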

Page 28: Control Status Readiness Report

Management of Critical Settings – MCS

Need to ensure that critical parameters, which can compromise the safety of the machine:
• can only be changed by an authorized person and nobody else
• are what they are supposed to be

MCS ensures that:
• Critical parameters are only changed by an authorized person (RBAC for authentication & authorization)
• The data is signed with a unique signature so that critical parameters cannot change unnoticed after the authorized person updated them (public-private key digital signatures)
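The sign-then-verify idea can be sketched as below. MCS uses public-private key digital signatures; to keep the example self-contained, a standard-library HMAC stands in for the asymmetric scheme, and the key and setting names are invented:

```python
# MCS-style signature sketch: a setting carries a signature so any later
# change is detected. Real MCS uses public-private keys; HMAC-SHA256 is
# only a stand-in here, and all names/values are illustrative.

import hashlib
import hmac
import json

KEY = b"authorized-person-key"   # stands in for the signer's private key

def sign(setting):
    # Canonical serialization so the same setting always signs the same.
    payload = json.dumps(setting, sort_keys=True).encode()
    return hmac.new(KEY, payload, hashlib.sha256).hexdigest()

def verify(setting, signature):
    return hmac.compare_digest(sign(setting), signature)

setting = {"device": "BLM.THRESHOLD", "value": 42.0}
sig = sign(setting)
print(verify(setting, sig))      # True: untouched since signing
setting["value"] = 99.0          # tampered with afterwards
print(verify(setting, sig))      # False: signature no longer matches
```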

Page 29: Control Status Readiness Report

LHC Controls Security Panel - LCSP

• The LHC Controls Security Panel is mandated to address all the technical and non-technical issues concerning AB security for controls
– Take responsibility for the RBAC data (ROLES and RULES)
• Ensure all critical parts of the machine are protected
– Take responsibility for the CNIC actions
• Reduction of the Trusted list, change of operational account passwords, ...

CNIC: Computing and Network Infrastructure for Controls


Page 31: Control Status Readiness Report

LHC controls infrastructure Scalability

• Tests => preliminary results, issues & foreseen solutions
• Systems which scale to the full LHC load
– Software Interlock System (SIS)
– Data concentrators (BLMs, BPMs)
– Alarm system LASER (new architecture)
– Controls MiddleWare (CMW) / Java API Parameter Control (JAPC)
– Diagnostic & monitoring tool DIAMON
• Systems potentially critical (tests ongoing; results mid-June)
– Post Mortem
– Logging service
• Scalable to the LHC load from all clients except the LHC BLMs
• Preliminary limit: ~5000 parameters/second
• Bottleneck: SQL call management by the Oracle server

Page 32: Control Status Readiness Report

Failures in Central Timing

• Tests have been performed to validate the behaviour of the controls infrastructure when the central timing crashes
• These timing "crash" tests are ongoing
• Results
– The behaviour of the control system with no timing is correct
– The application programs, servers and front ends recovered without manual intervention when timing returned

Page 33: Control Status Readiness Report

RBAC Dry Runs

• The LHC Controls Security Panel (LCSP) is preparing an RBAC dry run for end of June / early July
• The RBAC default behaviour is changed to
– "Access with no RBAC token is refused"
– A property that is not protected is not authorized
• All equipment servers will be loaded with RBAC access maps
• Typical applications will be tested
– LHC 2-tier & 3-tier applications
– LHC core controls (LSA)
– Background servers, concentrators
– Fixed displays


Page 35: Control Status Readiness Report

Alarms (LASER) for LHC

• An important increase in expected alarm events; required availability 365 days / 24 h
• Alarm console extended
– Allows for dynamic grouping of alarms
• New alarm definition database schema
– Ensures data quality by reducing redundancy and protecting against incomplete data
• Alarm server modified fundamentally to allow
– Fast response to an increase in load
– Increased resilience to external failures and improved diagnostic tools

[Diagram: LASER sources feeding the LASER core and LASER DB, serving the LASER consoles]

Page 36: Control Status Readiness Report

DIAgnostic & MONitoring System – DIAMON

• DIAMON provides
– A software infrastructure for monitoring the AB controls infrastructure
– Easy-to-use first-line diagnostics and tools to solve problems or to help decide about responsibilities for first-line intervention

[Screenshot: navigation tree and group view with monitoring test details and repair tools]


Page 38: Control Status Readiness Report

Injector Controls Renovation - Status Report

Injector Controls Architecture – InCA
• Architecture validation with critical use cases
• Check interfacing of the various components
– LSA core
– Standard CO components for acquisition
– Standard PS and LSA applications interfaced to the core
• Check data flow
– Low-level trim and monitoring values of correctors
– Orbit correction using the LHC beam steering application
– High-level trim + driving a front end
• Results
– Whole data flow validated
– Architecture closer to the final one

Injector Complex FE Renovation – 2nd half of 2008
• A "strategic" plan for the renovation of the FE controls infrastructure is due by mid-2008
• Development and validation of new front-end solutions in view of their first deployment in 2009

Page 39: Control Status Readiness Report

Summary

• The LHC controls infrastructure had been targeted for readiness for an engineering run at 450 GeV in November 2007. This goal has been met.
• The ongoing hardware commissioning and the extensive use of programs and databases ("learn by doing") have significantly changed the specifications; the resulting follow-up work has been done.
• Additional functionality has been prepared in 2008:
– network security (RBAC)
– diagnostic tools (DIAMON)

Page 40: Control Status Readiness Report

End
