RE: Migrating 9i to 10g performance issues

  • From: "Juan Miranda" <j.miranda@xxxxxxxxxxxx>
  • To: <sbecker6925@xxxxxxxxx>
  • Date: Tue, 25 Mar 2008 16:14:03 +0100

 
Hello. Hope this helps you.
 
(Does the target server have less memory than the source?)
 
 
You must determine the time needed to move the biggest table with export/import.
The other tables can be imported simultaneously.
 
- No archivelog, of course.
- Reduce checkpoints: very, very big redo log files (5 GB); check
log_checkpoint_interval, etc. (See the sketch just below.)
- Lots of memory assigned to the PGA (>1 GB) to recreate the indexes.
- Import is slow, so you must run several imports in parallel!
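 
A minimal sketch of those instance settings (the redo log paths/sizes and the
PGA target are only examples; adjust them to your own server):
 
sqlplus -s "/ as sysdba" <<'EOF'
-- stop archiving for the duration of the migration
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE NOARCHIVELOG;
ALTER DATABASE OPEN;
-- very big redo logs so log switches (and their checkpoints) are rare
ALTER DATABASE ADD LOGFILE GROUP 11 ('/u01/oradata/redo11.log') SIZE 5G;
ALTER DATABASE ADD LOGFILE GROUP 12 ('/u01/oradata/redo12.log') SIZE 5G;
-- no interval-driven checkpoints; plenty of PGA for the index rebuilds
ALTER SYSTEM SET log_checkpoint_interval = 0 SCOPE = BOTH;
ALTER SYSTEM SET pga_aggregate_target = 2G SCOPE = BOTH;
EOF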
 
- Don't create the indexes during the import.
Download ddlWizard and generate all the DDL. This tool will give you a
script to create all the indexes.
Split that script into 2 pieces and execute the 2 pieces at the same time
(see the sketch just below). Change the parallelism for the very big indexes.
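 
Something like this, for example (file names are only placeholders, and make
sure the split falls on a statement boundary, not in the middle of a CREATE
INDEX):
 
# split the generated index script into two halves
total=`wc -l < create_indexes.sql`
half=$(( (total + 1) / 2 ))
head -n $half create_indexes.sql             > create_indexes_1.sql
tail -n +$(( half + 1 )) create_indexes.sql  > create_indexes_2.sql
 
# run both halves at the same time, each in its own session
nohup sqlplus system/pass @create_indexes_1.sql < /dev/null > idx1.log 2>&1 &
nohup sqlplus system/pass @create_indexes_2.sql < /dev/null > idx2.log 2>&1 &
wait
 
# for the monster table's indexes, edit the statement to build in parallel:
#   CREATE INDEX big_tab_ix ON big_tab (some_col) PARALLEL 8 NOLOGGING;
#   ALTER INDEX big_tab_ix NOPARALLEL;   -- reset the degree afterwards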
 
- Don't analyze during the import. You will do it later, in parallel (see the
sketch just below).
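 
For instance, once everything is loaded (schema name and degree are only
examples):
 
sqlplus -s "/ as sysdba" <<'EOF'
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => 'USER1',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    degree           => 8,
    cascade          => TRUE);
END;
/
EOF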
 
 
You can do the export to a pipe and the import from that pipe SIMULTANEOUSLY.
The script below does this and keeps 4 export/import pairs running at a time.
 
 
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#!/bin/bash -x
#
# For every table listed in tablas_restores.txt: start a local import that
# reads from a named pipe, and start an export on the remote node (nodo1)
# that writes to its own pipe and streams it into the import pipe over rsh.
# No more than 4 imports are kept running at the same time.
 
for tb in `cat tablas_restores.txt`
do
 
echo "Starting import LOCALLY: " $tb
mkfifo /oracle10/tmp/$tb.pipeimp
imp file=/oracle10/tmp/$tb.pipeimp log=/tmp/imp_$tb.log parfile=/tmp/imp.par &
 
 
echo "Starting export on the REMOTE node: " $tb
# On nodo1: create the export pipe, export the table into it, and push the
# pipe's contents back into the import pipe on nodo2 through rsh.
# (tables= and owner= are mutually exclusive in exp, so only tables= is used.)
rsh -l oracle10 nodo1 ". /oracle10/.bash_profile ; mkfifo /oracle10/tmp/$tb.pipeexp ; exp br/br tables=$tb file=/oracle10/tmp/$tb.pipeexp & rsh -l oracle10 nodo2 'cat > /oracle10/tmp/$tb.pipeimp' < /oracle10/tmp/$tb.pipeexp" &
 
echo -----------------------------------------------------
 
echo "Checking how many imports are running"
ps aux | grep "imp file=" | grep -v grep > /tmp/import.txt
imports=`wc -l /tmp/import.txt | awk '{print $1}'`
echo "number of imports running: " $imports
 
# 4 or more running? wait.
while [ $imports -ge 4 ];
do
  echo $imports imports running, waiting ...
  sleep 3
  ps aux | grep "imp file=" | grep -v grep > /tmp/import.txt
  imports=`wc -l /tmp/import.txt | awk '{print $1}'`
  echo $imports
done
 
done
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 
----- exp.par
USERID=system/pass
direct=y
buffer=50000000
recordlength=65535
indexes=n
constraints=n
statistics=none
grants=n

----- imp.par
USERID=system/pass
statistics=none
fromuser=USER1
touser=USER1
buffer=50000000
constraints=n
grants=n
indexes=n
ignore=Y
 
 
 
Regards.
 
  _____  

From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx] On
Behalf Of Sandra Becker
Sent: Tuesday, March 25, 2008 12:57
To: oracle-l
Subject: Migrating 9i to 10g performance issues


Source DB:  Dell server, RHEL4, 12G RAM, Oracle 64 bit 9.2.0.8 SE1, little
endian
Target DB:   IBM VM on series z9, SLES10, 4G RAM, Oracle 10.2.0.3 EE, big
endian
Database is just shy of 1 terabyte in size--70 percent of data is in a
single table; total of 212 tables.
12 very large tables, including the monster--most have no column, like a
static date, that can be used to logically break the data into "partitions".
 
I'm working on procedures to move our production database from the Dell to
the IBM.  My tests so far indicate that I'm going to need more than 24
hours.  Management, in their infinite wisdom, is insisting that it be done
in less than 8.  It will take as long as it takes, but I'm wondering what I
can do to speed things up.  So far I've done the following:
 
1)  exp/imp - too slow overall
2)  plsql that commits at regular intervals, depending on the size of the
table - works very well for all tables under 1M; can load 111 tables in
under 2 hours using 2 concurrent sessions.  Works for larger tables, but
obviously takes much longer.  I had 2 sessions doing tables under 1M and 2
doing tables between 1M and 100M concurrently.  Didn't try for the 12 tables
over 100M.
3)  Direct-path insert - used on the table holding 70 percent of the data.
Four months ago I insisted this table have a static date column added.  I
can logically break the data loads down by date--they want the most current
data loaded first, the remainder can be done over a period of days.  This is
working reasonably well, but having done this same thing once before on this
table, I know it will take about a month to get all the data moved based on
the constraints I'm working under--can't be done during core business hours,
etc.  (A rough sketch of this kind of date-sliced load appears just after this
list.)
4)  I put the target database in noarchivelog mode for my testing.  Is this
a wise move for migrating production during the go live?
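 
Roughly the kind of statement I mean, with made-up table, column, and
database link names:
 
sqlplus -s user1/pass <<EOF
-- one date slice at a time, direct-path insert pulled over a database link
INSERT /*+ APPEND */ INTO big_table
SELECT * FROM big_table@prod9i
WHERE  static_date >= TO_DATE('2008-03-01','YYYY-MM-DD')
AND    static_date <  TO_DATE('2008-03-02','YYYY-MM-DD');
COMMIT;
EXIT
EOF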
 
Management has suggested that I leave off old data and load it later.  That
doesn't work with 95 percent of the tables because of their structure and foreign key
constraints.  They also suggested I use both the primary and the standby
databases to read from.  No way to test this until I go live--constraints
again--although this actually was part of my plan from the beginning.  Will
too many concurrent sessions loading data slow things down too much?  What
would I look at to determine this?  10g is new to me so I'm not familiar
with all the features yet and may be missing something significant.
 
Any suggestions are appreciated, other than telling management what they can
do with their time constraint.  (Already did that.)
 
Thanks.
 
Sandy
