RAL Status and Plans
Carmine Cioffi, Database Administrator and Developer
3D Workshop, CERN, 26-27 November 2009



TRANSCRIPT

Page 1: RAL Status and Plans

RAL Status and Plans

Carmine Cioffi, Database Administrator and Developer

3D Workshop, CERN,

26-27 November 2009

Page 2: RAL Status and Plans

OUTLINE

• 3D
  – Database configuration and HW spec
  – Storage configuration and HW spec
  – Future plans

• CASTOR
  – Database configuration and HW spec
  – Storage configuration and HW spec
  – Schema sizes and versions
  – Future plans

• Backup configuration

Page 3: RAL Status and Plans

3D: Database Configuration and HW Spec

• 3-node RAC for ATLAS (Ogma)
  – Red Hat 4.8, 64-bit
  – 2x quad-core Xeon E5410 @ 2.33GHz
  – 16 GB RAM

• 2-node RAC for LHCb (Lugh)
  – Red Hat 4.8, 64-bit
  – 2x dual-core AMD Opteron 2216
  – 16 GB RAM

Page 4: RAL Status and Plans

3D: Database Configuration and HW Spec

• For both RACs (Ogma and Lugh):
  – Oracle 10.2.0.4
  – Single OCR
  – Single voting disk

Page 5: RAL Status and Plans

3D: Storage Configuration and HW Spec

• Single disk array shared by both databases (Ogma, Lugh)
  – Storage (SAN, 2Gb/s FC):
    • Ogma: ~1/2 TB
    • Lugh: ~100 GB
  – Single switch: SANBOX 5200, 2Gb/s
  – 16 SATA disks, 260GB each
  – Configured as RAID10
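The array's usable space can be sanity-checked with quick arithmetic (a sketch assuming 2-way mirroring in the RAID10 layout, so half of the gross capacity is usable):

```shell
# Gross vs. net capacity of the shared 3D array (assumption: RAID10 with
# 2-way mirroring halves the gross space).
disks=16
disk_gb=260
gross=$((disks * disk_gb))   # 16 x 260GB = 4160GB gross
net=$((gross / 2))           # mirrored pairs leave half usable
echo "gross=${gross}GB net=${net}GB"
```

The ~1/2 TB used by Ogma and ~100 GB used by Lugh fit comfortably within that net capacity.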

Page 6: RAL Status and Plans

3D: Storage Configuration and HW Spec

• ASM:
  – Ogma (ATLAS):
    • Normal redundancy
    • Single disk group
    • Two failure groups
    • One disk (512GB) per failure group
  – Lugh (LHCb):
    • Normal redundancy
    • Single disk group
    • Two failure groups
    • One disk (512GB) per failure group
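A disk group with this layout could be created along these lines (a sketch only: the disk group name and device paths are illustrative, not the actual RAL configuration):

```shell
# Hypothetical ASM disk group matching the layout above: normal redundancy,
# a single disk group, two failure groups, one disk per failure group.
# Names and paths are illustrative.
sqlplus -s "/ as sysdba" <<'SQL'
CREATE DISKGROUP DATA NORMAL REDUNDANCY
  FAILGROUP fg1 DISK '/dev/raw/raw1'
  FAILGROUP fg2 DISK '/dev/raw/raw2';
SQL
```

With normal redundancy, ASM mirrors each extent across the two failure groups, so losing either disk still leaves a complete copy of the data.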

Page 7: RAL Status and Plans

3D: Database Diagram

[Diagram: the Ogma and Lugh RAC nodes connect through a single FC switch (SANBOX 5200, 2Gb/s) to the SAN; Ogma uses ~1/2 TB of 1 TB, Lugh ~100 GB of 1/2 TB.]

Page 8: RAL Status and Plans

3D Future Plans: DB Configuration and HW Spec

• There will be no changes to:
  – Number of nodes per RAC
  – Hardware specs
  – Oracle version

• Deploy on both RACs (Ogma and Lugh):
  – Two OCRs
  – Three voting disks
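On 10.2 clusterware, the extra OCR copy and voting disks would be added with commands along these lines (a sketch; the device paths are illustrative, and on 10g voting disks are added with the clusterware stopped, hence `-force`):

```shell
# Hypothetical commands for the planned redundancy changes (paths are
# illustrative, not the actual RAL devices). Run as root.
ocrconfig -replace ocrmirror /dev/raw/ocrmirror   # add a second OCR copy
crsctl add css votedisk /dev/raw/vote2 -force     # second voting disk
crsctl add css votedisk /dev/raw/vote3 -force     # third voting disk
```

Three voting disks let the cluster tolerate the loss of any one of them while still holding a majority.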

Page 9: RAL Status and Plans

3D Future Plans: Storage Configuration and HW Spec

• Two disk arrays shared by both databases (Ogma, Lugh)
  – Storage: SAN, 4Gb/s FC
  – Physical disks available:
    • Array 1: 16 SATA disks, 260GB each
    • Array 2: 6 SATA disks, 550GB each
  – Arrays configured as RAID5
  – Two switches:
    • SANBOX 5200, 2Gb/s
    • SANBOX 5602, 4Gb/s
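Rough usable capacity under this plan can be estimated as follows (a sketch assuming one disk's worth of parity per RAID5 array, and that ASM's normal redundancy mirrors array 1 against array 2, so usable space is bounded by the smaller array):

```shell
# Net space per RAID5 array = (disks - 1) x disk size; with ASM normal
# redundancy mirroring across the two arrays, usable space is roughly
# the net capacity of the smaller array.
a1=$(( (16 - 1) * 260 ))           # array 1 after parity, GB
a2=$(( (6 - 1) * 550 ))            # array 2 after parity, GB
usable=$(( a1 < a2 ? a1 : a2 ))    # mirrored across arrays
echo "array1=${a1}GB array2=${a2}GB usable~${usable}GB"
```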

Page 10: RAL Status and Plans

3D Future Plans: Storage Configuration and HW Spec

• ASM:
  – Ogma (ATLAS):
    • Normal redundancy
    • Single disk group, two failure groups
    • Two or more disks per failure group
  – Lugh (LHCb):
    • Normal redundancy
    • Single disk group, two failure groups
    • One or more disks per failure group

Page 11: RAL Status and Plans

3D: Database Diagram

[Diagram: Ogma and Lugh connect through two FC switches (switch 1: SANBOX 5200, 2Gb/s; switch 2: SANBOX 5602, 4Gb/s) to disk arrays 1 and 2; ASM mirrors each database's data across the two arrays, with array 2 holding the mirror copies.]

Page 12: RAL Status and Plans

Castor: Database Configuration and HW Spec

• Two 5-node RACs (Pluto, Neptune) plus one single instance (Uranus)
  – Red Hat 4.8, 32-bit
  – Dual quad-core Intel Xeon, 3GHz
  – 4 GB RAM
• Oracle 10.2.0.4
• Single OCR
• Single voting disk

Page 13: RAL Status and Plans

Castor: Storage Configuration and HW Spec

• Single disk array used by the two RACs
• Storage:
  – Pluto: ~200GB
  – Neptune: ~220GB
  – Single instance: 624GB
• Overland 1200 disk array
  – Twin controllers
  – Two Fibre Channel ports to each controller
  – 10 SAS disks (300GB each, 3TB total gross space)
  – RAID1 (1.5TB net space)
• Two Brocade 200E 4Gbit switches

Page 14: RAL Status and Plans

Castor: Storage Configuration and HW Spec

• ASM (Pluto, Neptune):
  – Normal redundancy
  – Single disk group
  – Two failure groups
  – One disk (512GB) per failure group

Page 15: RAL Status and Plans

Database Overview

[Diagram: the Neptune and Pluto RACs connect through two Brocade 200E switches to the Overland 1200 array (Pluto ~200GB of 1/2 TB, Neptune ~220GB of 1/2 TB); Uranus uses a SCSI-attached disk array (624GB of 1.8TB).]

Page 16: RAL Status and Plans

Castor Future Plans: DB Configuration and HW Spec

• There will be no changes to the number of nodes per RAC, the hardware, or the Oracle version

• Deploy on both RACs (Pluto and Neptune):
  – Two OCRs
  – Three voting disks

Page 17: RAL Status and Plans

Castor Future Plans: Storage Configuration and HW Spec

• Two disk arrays shared by both databases (Neptune, Pluto)
  – Storage: EMC CLARiiON
  – Physical disks available:
    • 300GB SAS drives
    • 2TB gross
  – RAID5 configuration
• Two Brocade 200E 4Gbit switches

Page 18: RAL Status and Plans

Castor Future Plans: Storage Configuration and Spec

• ASM (Pluto, Neptune):
  – Normal redundancy
  – Single disk group, two failure groups
  – One or more disks per failure group

Page 19: RAL Status and Plans

Castor: Schema Sizes and Versions

Pluto:

  Schema        Version      Size
  Name Server   n/a          1.8GB
  VMGR          n/a          1.7MB
  CUPV          n/a          0.2MB
  CMS Stager    2_1_7_27_1   1.9GB
  Gen Stager    2_1_7_27_1   3.8GB
  Repack_219    2_1_9_1      17MB
  Repack        2_1_7_27     62MB
  Gen SRM       2_8_2        540MB
  SRM CMS       2_8_2        1.1GB
  VDQM2         2_1_8_3_1    5MB

Neptune:

  Schema        Version      Size
  Atlas Stager  2_1_7_27_1   18GB
  LHCb Stager   2_1_7_27_1   1.8GB
  SRM Atlas     2_8_2        5.1GB
  SRM LHCb      2_8_2        1.2GB

Page 20: RAL Status and Plans

Backup configuration

• Incremental level 0 once a week
• Incremental level 1 the other days of the week
• All backups are followed by logical validation
• Archived log backups are done during the day (for now)
• Once we move to the new hardware, the archived logs will be multiplexed on a shared disk outside ASM
• Backups are stored on the local disk
• Backups are copied from the local disk to tape and kept for three months
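The weekly cycle described above could be driven by a crontab along these lines (purely illustrative: the script names and times are assumptions, not the actual RAL setup):

```shell
# Illustrative crontab for the stated policy (script names hypothetical):
# level 0 on Sunday, level 1 on the other six days.
0 2 * * 0   /home/oracle/scripts/rman_level0.sh
0 2 * * 1-6 /home/oracle/scripts/rman_level1.sh
```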

Page 21: RAL Status and Plans

Backup configuration

• RMAN configuration parameters:
  – CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 8 DAYS;
  – CONFIGURE BACKUP OPTIMIZATION ON;
  – CONFIGURE DEFAULT DEVICE TYPE TO DISK;
  – CONFIGURE CONTROLFILE AUTOBACKUP ON;
  – CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/oracle_backup/pluto/%F.bak';
  – CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET;
  – CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1;
  – CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1;
  – CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/oracle_backup/pluto/pluto_%U.bak';
  – CONFIGURE MAXSETSIZE TO UNLIMITED;
  – CONFIGURE ENCRYPTION FOR DATABASE OFF;
  – CONFIGURE ENCRYPTION ALGORITHM 'AES128';
  – CONFIGURE ARCHIVELOG DELETION POLICY TO NONE;
  – CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/opt/oracle/app/oracle/product/10/db_1/dbs/snapcf_pluto1.f'; # default

Page 22: RAL Status and Plans

Backup configuration

• Incremental 0:
  – backup incremental level 0 duration 12:00 database;
  – backup archivelog all delete all input;
  – report obsolete;
  – delete noprompt obsolete;

• Incremental 1:
  – backup incremental level 1 duration 12:00 minimize time database;
  – backup archivelog all delete all input;

• Validation:
  – restore validate check logical database archivelog all;

Page 23: RAL Status and Plans


ANY QUESTIONS?