Data protection in Wrigley’s highly available mission-critical environment
DESCRIPTION
Attend this session and hear how the Wm. Wrigley Jr. Company has designed and runs HP Data Protector in a highly available mission-critical environment. The presenter will tell you how Wrigley designed Data Protector Zero Downtime backups for SAP using SAP BR*Tools and the HP StorageWorks XP array. You’ll also hear about the use of tape virtualization and the advantages it offered for backing up SAP archive and redo logs. You’ll see how virtualization made it possible to schedule all backups during non-core hours to reduce system impact, and how Wrigley implemented a three-cell Data Protector environment under HP Serviceguard. The session will conclude with a discussion of plans for a future dual-data-center implementation.
TRANSCRIPT
©2010 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice
Data protection in Wrigley’s highly available mission-critical environment
James Odak, Sr. Global Systems Engineer, Mars IS US, LLC
Technologies referenced in this presentation:
• HP Data Protector
• HP Serviceguard
• HP XP RAID Manager
• SAP/Oracle Databases
• Virtual & Physical Tape Libraries
• HPUX OS
• Windows OS
My Background
• Employed by Wrigley since Jan 2002
• Mars/Wrigley Merger 2009
• Previous Employers (since 1995)
– Solo Cup
– Montgomery Ward
– Ben Franklin Stores
My Background – Cont.
• Technologies
– UNIX Server administration
– HPUX 9.X – 11.31
– SCO, AIX, SINIX, SLES, AT&T Unix
– SAN Administration
– XP512-XP12000, EVA5000-EVA8100 + Others
– Brocade Fabric Switches, 1–4 Gb/s
– Data Protector/OmniBack 3.x – 6.1
– Tape Drives, Libraries and VTL
Wrigley’s DC Background
• 2000/2001 – Wrigley – New Data Center (GDC)
– To support global instance of SAP
– ~20 PA-Risc HPUX Servers running HPUX 11.11
– ~5 Windows OS Servers running Windows 2000
– XP512 Disk Array – 6 TB
– OmniBack 4.0
– LTO1 & 9840 Tape Library
– Tape area SAN Network
OmniBack Details
• Single Cell Server on HPUX
• ~30 Clients
• SAP Integration – 2 Major DB Instances
– Online
– Offline
– Archive/Redo Logs
– Split Mirror Backups
• 8 LTO1 Drives & 2 9840 Drives
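The split-mirror (Zero Downtime) backups above combine the database backup with XP RAID Manager (CCI) pair operations. A minimal sketch of the sequence, not Wrigley's actual script: the device group name `sapdb` is an example, and `RUN=echo` makes the script print the commands rather than execute them, since RAID Manager is only present on a configured host.

```shell
#!/bin/sh
# Hypothetical sketch of a split-mirror backup sequence with XP RAID Manager.
# RUN=echo prints the commands instead of executing them; set RUN="" on a
# host with RAID Manager configured. Device group "sapdb" is an example.

RUN=${RUN:-echo}

# 1. Database is placed in backup mode (e.g. via SAP BR*Tools) -- not shown.
# 2. Split the business copy from the production volumes:
$RUN pairsplit -g sapdb
# 3. Wait (up to 300 s) for the pair to reach the suspended (PSUS) state:
$RUN pairevtwait -g sapdb -s psus -t 300
# 4. Database leaves backup mode; the split mirror is then backed up from the
#    backup host. Afterwards, re-synchronize the mirror for the next cycle:
$RUN pairresync -g sapdb
```

Production volumes stay online throughout; only the brief backup-mode window and the instant of the split touch the database, which is what makes the backup "zero downtime."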
Growth 2001 - 2005
• OmniBack 4.0 → 4.1 → 5.1
– 200+ Clients
• Server Growth
– 50+ PA-Risc HPUX Servers
– 100+ Windows Servers (P-Class Blades)
– Windows 2000 & 2003
• Storage Growth
– XP12000 + EVA5000 – 100 TB
Growth 2001 – 2005 – Cont.
• Tape Library Growth
– 14 LTO2 and 4 9840 Drives
• SAP Growth
– 5 Major DB instances
• Disaster Recovery Added
– Second Data Center
Pain Points
• Single DP Cell with 200+ Clients inefficient
• Backup Window – 24/7
• Many physical Tape Library Failures
• SAP generating archive logs faster than they could be backed up
• DP Cell Server Hardware – Single Point of Failure
Pain Points – Cont.
• Recovery Times Unacceptable
• Backup Success Rate – Less than 70%
• Many Sleepless Nights – working on backup issues
• Overall Image of backup reliability was poor
– Many projects listed the risk of backup failure and long recovery windows as sources of project delay.
Data Center Refresh - 2007
• Replaced all PA-Risc Servers
– Integrity Servers running HPUX 11.23
– Boot From SAN
• SAN Upgrades – 150 TB
– Added EVA4100 and EVA8100
– Upgraded to 4 Gb/s SAN Fabric (Switches & HBAs)
• Tape Technology Refresh
– VTL – 100TB
– LTO3 – MSL8096
Data Protector Refresh - 2007
• Data Protector 6.0 w/Manager of Managers
• 3 DP Cell Servers – 100 Clients or less per Cell
• DP Cells in a Highly Available Configuration
– Each Cell running as an HP Serviceguard Package
– 3 Packages running in a 4 node cluster
• Virtual Tape Library
• All backups directly to VTL over a 4 Gb/s SAN Fabric
• Background copy of critical Production backups to physical tape
Results
• DP Cell performance much more efficient
• Backup Window – 5 PM – 8 AM
• Few or no backup failures due to the VTL
• Physical Library failures have no impact.
• SAP Archive log backups in seconds
• Multi-node cluster provides failover for DP Cells in the event of Server failure.
Results – Cont.
• Recovery times reduced to acceptable levels
• Backup Success Rate 99%+
• No more sleepless nights!!!!!!
• Image of backup and recovery reliability is much higher
– Projects now utilize backups and recovery in their plans rather than fear it, or list it as a risk.
Roadblocks and Solutions
• 4 Node Cluster for Backups Costly
– Utilize cluster for other management apps
• Virtual Tape Drive Sprawl & DP Drive License
– Advanced backup-to-disk options in Data Protector
• How to failover DP Cell Libraries?
– Create uniform device files for Drives/Arms
– Use the option to discover changed SCSI addresses
– 11.31 Agile Path (future)
• Expired Tapes on VTL
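The "uniform device files" solution above can be sketched as a small per-node script: each cluster node links its own tape device files to identical stable names, so a Data Protector cell package that fails over finds its drives and library arm at unchanged paths. The names, device files, and link directory below are illustrative assumptions, not Wrigley's actual configuration.

```shell
#!/bin/sh
# Hypothetical sketch: create uniform device-file names for tape drives and
# the library arm (picker) on every cluster node, so failed-over DP cells
# see the same paths. All names and device files below are examples.

LINK_DIR=${LINK_DIR:-./dp_tape}   # in production, a stable path such as /dev/dp_tape

mkdir -p "$LINK_DIR"

# Mapping: uniform name -> this node's actual device file (assumed values)
while read name dev; do
  [ -n "$name" ] || continue
  ln -sf "$dev" "$LINK_DIR/$name"   # DP device configs reference $LINK_DIR/<name>
done <<EOF
drive0 /dev/rmt/c10t0d0BESTnb
drive1 /dev/rmt/c10t1d0BESTnb
picker /dev/rac/c10t2d0
EOF
```

The right-hand device files differ per node; only the mapping table changes, while the paths Data Protector is configured with stay constant cluster-wide.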
Future
• Backing up Virtual Guests
– VMWare & Integrity Virtual Machines
• Dual Data Center Model
– Failing a DP Cell Package from data center to data center
• Deduplication
Data Protector Maintenance
• Keeping the DP Cell database clean is key to a well-performing cell
– Keep session history clean and remove decommissioned clients from the cell
– Routine purges
– omnidbutil -purge -filenames/-sessions -force
– Routine database write/read (export/import) cycles
– omnidbutil -writedb/-readdb
– Keep up to date with patches
– New patches every three months from HP
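The routine purge and export/import tasks on this slide could be scripted for cron roughly as follows. This is a sketch, not HP's or Wrigley's actual maintenance job: exact omnidbutil syntax varies by Data Protector version, and the retention value and export directory are assumptions. `RUN=echo` prints the commands instead of running them, for use where omnidbutil is not installed.

```shell
#!/bin/sh
# Hypothetical IDB maintenance sketch for a DP cell server. RUN=echo prints
# the commands instead of executing them; set RUN="" on a real cell server
# with omnidbutil available. Retention and paths are example values.

RUN=${RUN:-echo}

# Purge expired filename and session records from the internal database
$RUN omnidbutil -purge -filenames -force
$RUN omnidbutil -purge -sessions 90        # drop session history older than ~90 days

# Periodic export/re-import to compact the internal database
$RUN omnidbutil -writedb /var/tmp/idb_export
$RUN omnidbutil -readdb /var/tmp/idb_export
```

The writedb/readdb pass requires the cell to be quiesced, so in practice it would be scheduled inside the backup-free window rather than alongside running sessions.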
To learn more on this topic, and to connect with your peers after
the conference, visit the HP Software Solutions Community:
www.hp.com/go/swcommunity