TRANSCRIPT
A Case Study in Upgrading Oracle Applications and Migrating to Linux
Jeff Slavitz, Oracle Applications DBA
Overview
- Last September Camelbak upgraded from 11.5.4 to 11.5.10 and migrated the database and application from Solaris to Linux.
- This presentation focuses on the migration, not on the 11.5.10 upgrade (we only have an hour).
- The goal is to give DBAs a road map of the documents and steps to migrate the database and application.
- Discuss migration problems and lessons learned.
- The IT Manager will review business issues related to the Big Bang.
Jeff Slavitz - NorCal OAUG Training Day 2008
Camelbak and Oracle Apps
- 100 GB database
- Financial applications (GL, AP, PO, FA)
- Order Entry, Inventory, Shipping, WIP
- Custom materialized views used as a data warehouse
- Approximately 300 employees in Petaluma, San Diego and the Philippines
Technical Issues
- Running 11.5.4 / 8.1.7 / Solaris since 2002; database, application and operating system unchanged since then
- Extremely poor performance: the nightly materialized view refresh takes 8 hours, and users complain
- Difficulty getting patches from Support
- Want to upgrade to 11.5.10 and migrate to Linux
- The customer wants a Big Bang, not a big explosion
Business Issues
Jeannine Sarragossa, Camelbak IT Manager
- Applications upgrade issues
- Hardware migration: constraints, cost, licensing impacts
- Big Bang vs. a stepped project for each component
Old Hardware
Database and concurrent manager server:
- Sun E3500 running Solaris
- 4 processors
- 4 GB memory
Forms server:
- Sun E3500 running Solaris
- 2 processors
- 2 GB memory
New Hardware
Database server:
- Dell PowerEdge 2950 running Red Hat Linux 64-bit
- Two dual-core 2.33 GHz processors
- 16 GB memory
Application server:
- Dell PowerEdge 2950 running Red Hat Linux 32-bit
- Two dual-core 2.33 GHz processors
- 16 GB memory
Note to self: buy as much horsepower as you can budget, and more disk space than you think you will need.
The Big Bang Overview
- The process took 8 months from initial planning to go-live.
- Four full CRP runs to test the master task list.
- Several mini-CRP runs to test which upgrade/migration path was fastest: experimented with a staged APPL_TOP, and tested upgrade and migration times on different machines.
- The go-live downtime estimate was reduced from 14 days to 7.
- Normally we would have pre-applied as many patches to PROD as possible, but didn't due to system age and testing requirements.
The Big Bang Overview
- The first step in the upgrade/migration was to clone PROD to a loaner Solaris system. This machine was faster, and we cloned from two tiers down to one. This allowed the original PROD to be kept as the fallback system.
- Set all PROD tablespaces to read-only so the company could run reports during the upgrade downtime.
- While on 11.5.4, upgraded the database from 8.1.7 to 10.1. Skipped 9i; 10.2 is not supported with 11.5.4.
- Upgraded the application from 11.5.4 to 11.5.10 on Solaris; 11.5.4 is not supported on Linux.
- Upgraded the database from 10.1 to 10.2 on Solaris. Data Pump is faster in 10.2.
- Migrated the database and application to Linux.
Database Migration Requirements
- On OTN, refer to the installation guide for your platform, e.g. the 10gR2 Linux x86-64 Install Guide.
- It is VERY important to install all operating system patches and packages. You may need many RPMs to get all of the required kernel packages and versions.
- Set kernel parameters at least as high as indicated in the installation guide.
App Migration Requirements
- Read Note 316806.1, Oracle Applications Installation Update Notes, Release 11i (11.5.10.2) for Linux x86 (or the note appropriate for your operating system).
- Apply all RPMs indicated to get the proper packages.
- You need gcc and g++ version 3.2.3. To find your version, type gcc -v (your version is likely much higher). For Red Hat, apply patch 4198954 to get the RPMs for the older versions of gcc and g++.
- You will also need: JDK 1.3.1 or 1.4.2, AD.I.5, and AutoConfig enabled.
Migration Thoughts
- Consider upgrading the source to the highest supported RDBMS BEFORE migration to get improved Data Pump performance and cool new features, e.g. attaching to and detaching from running jobs.
- 9i export/import is slow; 10gR1 Data Pump is about 2x faster than 9i; 10gR2 Data Pump is about 3x faster than 9i.
- Trade off expdp/impdp performance against the time to upgrade if the source machine is slower than the target.
- 10gR2 100 GB export on Solaris: 7 hours. 10gR2 100 GB import on Linux: 5 hours. Your mileage will vary based on your hardware configuration.
Migration Thoughts
If using ssh, set up trusted FTP access between all computers involved in the upgrade:
- Run ssh-keygen -t rsa1 on the source
- Copy the source $HOME/.ssh/identity.pub to the target $HOME/.ssh/authorized_keys
- Run ssh-keygen -t dsa on the source
- Concatenate the source $HOME/.ssh/id_dsa.pub to the target $HOME/.ssh/authorized_keys
Database Migration
Many documents: 3 binders, a 25-page master task list, and notes. Organization is important.
Main document: Note 362205.1, Export/Import Process for 11i Using 10gR2. The major steps in the process are:
1. Clear the existing network topology (the application's understanding of which nodes belong to the application):
   - Deregister the source database server as shown in Note 362203.1, steps 1-4.
   - An easier way is to clear the entire network topology using FND_NET_SERVICES.remove_server, as shown in Note 218089.1, the AutoConfig FAQ.
   - Create the new topology by running AutoConfig first on the RDBMS tier and then on the application tier.
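The topology-clearing call above can be sketched as follows. This is only a sketch: the instance name is illustrative, and the exact procedure signature should be confirmed against Note 218089.1 before running it.

```sql
-- Run as the APPS user. 'PROD' is an illustrative instance name;
-- confirm the exact procedure arguments against Note 218089.1.
EXEC fnd_net_services.remove_server('PROD');
COMMIT;
```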
Database Migration
2. Apply patches to the source system.
3. Minimize invalid objects. Run $ORACLE_HOME/rdbms/admin/utlirp to invalidate all database objects, then run utlrp to recompile all database objects.
4. Run scripts to extract settings, mainly Advanced Queue settings, which aren't recreated by the import process.
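The invalidate-and-recompile pass in step 3 can be sketched in SQL*Plus as follows. Note that utlirp.sql has startup-mode requirements (check the script's header comments for your release); the query at the end is just a sanity check added here.

```sql
-- Run as SYSDBA. utlirp.sql invalidates all PL/SQL (and Java) objects;
-- utlrp.sql then recompiles everything that is invalid.
@?/rdbms/admin/utlirp.sql
@?/rdbms/admin/utlrp.sql

-- Sanity check: confirm nothing was left invalid.
SELECT owner, object_type, COUNT(*)
FROM   dba_objects
WHERE  status = 'INVALID'
GROUP  BY owner, object_type;
```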
Database Migration
6. Use expdp to export the source. The export log may show warning messages for the export of application triggers; apply database patch 5459871 on source and target to prevent this problem.
7. Time saver: prepare the target while the export is running. Install 10gR2 as shown in Note 362203.1, steps 6-12. Create the target database and schemas and prepare for import.
8. Time saver: back up the target before the import!
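The expdp call in step 6 is typically driven by a parameter file. The fragment below is illustrative only: the directory object, file names and sizes are assumptions, and Note 362205.1 supplies the exact parameters for the 11i export.

```
# Illustrative expdp parameter file (names and sizes are assumptions):
#   expdp system parfile=aexp.dat
directory=dmpdir
full=y
dumpfile=aexp%U.dmp
filesize=10GB
logfile=aexp.log
```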
Database Migration
9. Use impdp to import the export file into the target. Expect many messages in the log file.
10. Perform post-import updates to set up Advanced Queues, create context and spatial objects, and set up the database context file.
Split Configuration
- During test upgrades you can bring up the application at this point (application on the old hardware, database on the new hardware) to confirm the database migration was successful.
- In the application context file, change s_dbhost, s_dbdomain and s_dbport to reflect the new database configuration.
- Remove the old database tier, or clear the network topology if you haven't already.
- Run AutoConfig and start the application.
- See Note 369693.1 for more details.
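Inside the application context file, those edits look roughly like the fragment below. The values are illustrative and the surrounding element names can vary between releases, so use your own context file as the template.

```
<!-- Illustrative values; edit your own context file, then run AutoConfig. -->
<dbhost oa_var="s_dbhost">newdbhost</dbhost>
<domain oa_var="s_dbdomain">example.com</domain>
<dbport oa_var="s_dbport">1521</dbport>
```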
Migrating the Application
- Much easier than the database migration.
- Main document: Note 238276.1, Migrating to Linux with Oracle Applications Release 11i. The major steps in the process are:
- Enable AutoConfig on the source if it isn't already.
- Copy APPL_TOP, OA_HTML, OA_JAVA, COMMON_TOP/util and COMMON_TOP/pages to the target.
- Run adgenpsf.pl to generate a manifest of customer-specific files, and upload the file to Oracle.
- Within about 30 minutes a downloadable patch file is ready. This patch contains the Linux library files.
Migrating the Application
- A common problem with the manifest upload is a bad Release header. Correct it to read:
  Release: 11.5.10.2
- Run the clone context tool, adclonectx.pl, to generate the application context file on the target.
- Run rapidinstall -techstack to install the iAS technology stack.
- Apply the interoperability patches.
- Run AutoConfig.
- Apply the customer-specific patch downloaded above.
- Regenerate forms, reports, etc.
- Start the application.
Post-Migration Issue: Recompile All Database Objects
Post-go-live P1: when querying orders, users received this error:
  ORA-04068: existing state of packages () has been discarded
  ORA-04065: not executed, altered or dropped stored procedure APPS.OE_HEADER_ADJ_UTIL
  ORA-06508: PL/SQL: could not find program unit being called APPS.OE_HEADER_ADJ_UTIL
  ORA-06512: at APPS.OE_OE_TOTALS_SUMMARY, line 18
  ORA-06512: at APPS.OE_OE_TOTALS_SUMMARY, line 486
To avoid this, after the upgrade and migration are complete, run $ORACLE_HOME/rdbms/admin/utlirp to invalidate all database objects, then utlrp to recompile all objects.
Post-Migration Issue: Confirm Concurrent Manager Nodes
After the application migration, the concurrent manager administration screen may show nodes from the source system. This causes the concurrent manager to error on startup.
Confirm Concurrent Manager Nodes
[Screenshot: concurrent manager administration screen showing nodes from the source system]
Confirm Concurrent Manager Nodes
To change the nodes to point to the target system:
- As APPS, exec fnd_conc_clone.setup_clean
- Run AutoConfig on the database node, then on the application node
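The cleanup step above can be sketched in SQL*Plus as follows (fnd_conc_clone.setup_clean is the procedure named on the slide; run it as the APPS user):

```sql
-- As APPS: purge the stored node/queue topology so that AutoConfig
-- can repopulate it with the target system's nodes.
EXEC fnd_conc_clone.setup_clean;
COMMIT;
-- Then run AutoConfig on the database tier, followed by the
-- application tier.
```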
Post-Migration Issue: RMAN Bug
- Bug 5247609: with Oracle 10g, RMAN suffers severe performance degradation when using the cost-based optimizer.
- Switch to rule-based optimization at the start of all RMAN scripts:
  sql "alter session set optimizer_mode=RULE";
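In context, an RMAN script would lead with that statement; the backup command below is illustrative:

```
run {
  sql "alter session set optimizer_mode=RULE";
  # Illustrative backup command; substitute your own backup steps.
  backup database;
}
```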
RMAN Bug
Set the optimizer to RULE before registering the database in RMAN, or you will get errors like this:
  Starting full resync of recovery catalog
  RMAN-03014: implicit resync of recovery catalog failed
  RMAN-03009: failure of full resync
  ORA-01652: unable to extend temp segment by in tablespace
- This causes a temp segment in the SYSTEM tablespace to grow to 50+ GB.
- If your database is already registered in RMAN, you will need to unregister and then re-register the database.
Post-Migration Idea: Use an SPFILE
- Use an spfile instead of init.ora to start the database.
- It lets you maintain database initialization parameters across instance shutdown and startup.
- It lets you dynamically change system parameters, e.g. alter system set xxx scope=[spfile|memory|both].
- To create the initial spfile, export init.ora to an spfile:
  cd $ORACLE_HOME/dbs
  create spfile='spfilePROD.ora' from pfile='initPROD.ora';
- Restart the database; the spfile will be used automatically UNLESS …
Using an SPFILE with Oracle Apps
- The AutoConfig-generated addbctl.sh used to start the database is hardcoded, in adstrtdb.sql, to start up using an init.ora. To change to an spfile:
- Change the AutoConfig template: $ORACLE_HOME/appsutil/template/adstrtdb.sql
- OR, since AutoConfig creates init.ora but never overwrites it, change your init.ora to contain only:
  spfile=$ORACLE_HOME/dbs/spfilePROD.ora
- Periodically copy the spfile back to an init.ora as a backup:
  create pfile='initDB1test.ora' from spfile='spfileDB1.ora';
Using an SPFILE with Oracle Apps
If you want to recreate the SPFILE as part of running AutoConfig:
- Delete init.ora
- Run AutoConfig to create a new init.ora
- Shut down the database and restart using the new init.ora
- Create a new spfile from the new init.ora
- Shut down the database
- Replace init.ora with a custom init.ora that specifies the spfile (or change the adstrtdb.sql template)
- Restart the database using the new spfile
See Note 249664.1, Pfile vs SPfile.
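The recreate sequence above can be sketched in SQL*Plus. The instance name PROD and file locations are illustrative, and the commands assume you are running from the $ORACLE_HOME/dbs directory:

```sql
-- Illustrative sketch; run as SYSDBA from $ORACLE_HOME/dbs.
CONNECT / AS SYSDBA
SHUTDOWN IMMEDIATE
STARTUP PFILE='initPROD.ora'   -- start on the fresh AutoConfig init.ora
CREATE SPFILE='spfilePROD.ora' FROM PFILE='initPROD.ora';
SHUTDOWN IMMEDIATE
-- Now replace init.ora with one containing only:
--   spfile=<ORACLE_HOME>/dbs/spfilePROD.ora
STARTUP                        -- starts via init.ora, which points at the spfile
```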
Performance

Before:
- Database server frequently running at 100% capacity during business hours
- Users frequently complain about slowness
- Custom materialized view refresh took 8 hours to complete

After:
- Database server load rarely above 25%, even at peak times
- Users never complain about performance
- Custom materialized view refresh takes 5 minutes to complete
Lessons Learned
- Required a great deal of organization with documents.
- Buy more CPUs and memory than you think you need.
- Buy more disk space than you think you can use.
- A disk drive failed during the migration; have spare hardware on-site.
- Much of the work was done remotely, and at one point my PC at work was unplugged. Consider using a console with remote console software.
- Use more than one DBA. Have all DBAs practice the entire master task list so they can be interchangeable.
Lessons Learned
- As a DBA I was concerned about the Big Bang, but the business dictates the timing of the project.
- Alternatives would have been to upgrade now and migrate later, or migrate the database now and the application later.
- The cost to the business of not doing the Big Bang was more downtime, more user testing at each stage, and delay of the final implementation.
- The more you are going to do at one time, the more you need a strong technical team.