RE: Migrating 3TB size database from HP to AIX, also 9i to 10g.

  • From: j.miranda@xxxxxxxxxxxx
  • To: kennaim@xxxxxxxxx
  • Date: Thu, 20 Sep 2007 15:59:17 +0200 (CEST)



This script starts 4 simultaneous export/import streams.
Each export writes to a named pipe, whose contents are then rsh'd to the
other server. The import reads the export's stream from another pipe.

There is no intermediate write to disk between export and import.

#!/bin/bash -x
#
# For each table in tablas.txt: start an import on the local node reading
# from a named pipe, and an export on the remote node writing to another
# named pipe whose contents are streamed over rsh into the local pipe.
# No dump file ever touches disk.

for tb in `cat tablas.txt`
do

echo "Starting import on LOCAL node: " $tb
mkfifo /oracle10/tmp/$tb.pipeimp
imp file=/oracle10/tmp/$tb.pipeimp log=/tmp/imp_$tb.log parfile=/tmp/imp.par &

echo "Starting export on REMOTE node: " $tb
# Earlier single-host variant, kept commented out:
#mkfifo /tmp/$tb.pipeexp
#exp file=/tmp/$tb.pipeexp log=/tmp/exp_$tb.log tables=DWP.$tb parfile=/tmp/exp.par & remsh hpdwprod 'cat > /tmp/$tb.pipeimp' < /tmp/$tb.pipeexp

# On nodo1: create the export pipe, export the table into it, and stream
# the pipe's contents into the import pipe on nodo2.
rsh -l oracle10 nodo1 ". /oracle10/.bash_profile ; mkfifo /oracle10/tmp/$tb.pipeexp ; exp br/br tables=$tb file=/oracle10/tmp/$tb.pipeexp & rsh -l oracle10 nodo2 'cat > /oracle10/tmp/$tb.pipeimp' < /oracle10/tmp/$tb.pipeexp" &

echo -----------------------------------------------------

echo "Checking the number of running imports"
ps aux |grep -w imp |grep -v grep > /tmp/import.txt
imports=`wc -l /tmp/import.txt |awk '{print $1}'`
echo "number of imports running: " $imports

# If 4 or more imports are already running, wait before starting the next one
while [ $imports -ge 4 ]
do
  echo imports $imports at the limit of 4 .... sleeping
  sleep 3
  ps aux |grep -w imp |grep -v grep > /tmp/import.txt
  imports=`wc -l /tmp/import.txt |awk '{print $1}'`
  echo $imports
done

done


----------------
imp.par:
USERID=system/passwd
statistics=none
fromuser=RESTORES
touser=RESTORES
buffer=50000000
constraints=n
grants=n
indexes=n
ignore=Y
----------------
exp.par:
USERID=system/passwd
direct=y
buffer=50000000
recordlength=65535
indexes=n
constraints=n
statistics=none
grants=n
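
For clarity, the essence of the approach for a single table looks roughly
like this (hostnames, table name and paths are placeholders, and wiring the
two pipes through rsh's stdout is just one possible variant of the above):

# Minimal single-table sketch; srcbox and MY_TABLE are placeholders.
TB=MY_TABLE

# On the target server: create the pipe and start the import reading from it
mkfifo /tmp/$TB.pipe
imp parfile=/tmp/imp.par file=/tmp/$TB.pipe log=/tmp/imp_$TB.log &

# On the source server (via rsh): export the table into a pipe there and
# stream that pipe straight back into the target's pipe -- no dump file on disk
rsh srcbox "mkfifo /tmp/$TB.pipe ; exp parfile=/tmp/exp.par tables=$TB file=/tmp/$TB.pipe & cat /tmp/$TB.pipe" > /tmp/$TB.pipe

wait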


We tried this method, but the bottleneck is the redo log writing.
There is a hidden parameter that disables logging altogether, but it isn't
recommended, of course.
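
(Not that hidden parameter, but a sketch of the standard ways to reduce the
redo pressure on the target while it is being loaded -- assuming the target
can simply be rebuilt if it fails mid-migration. Group numbers, sizes and
paths below are only examples.)

# Sketch: lighten redo/archiving overhead on the TARGET during the load.
sqlplus -s "/ as sysdba" <<'EOF'
-- run without archiving for the duration of the migration
shutdown immediate
startup mount
alter database noarchivelog;
alter database open;

-- larger online redo logs mean fewer log switches and checkpoints
alter database add logfile group 11 ('/oracle10/redo/redo11a.log') size 1024m;
alter database add logfile group 12 ('/oracle10/redo/redo12a.log') size 1024m;
EOF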

Hope this helps you.

Greetings.


> I migrated an Oracle Apps database 2 weeks ago from Dell to PowerPC IBM
> hardware, which has a different endianness. Oracle Apps is not
> self-contained, so we couldn't use transportable tablespaces and had to
> use export/import. We used Data Pump, which took 2 hours for the expdp
> and 12 hours for the impdp. Of those 12 hours, less than 2 were actually
> spent copying rows; the rest of the time went into building all the
> other objects. We used the parallel setting in Data Pump set at 64 and,
> while testing, noticed that many operations are not run in parallel.
> Each table is loaded by a single thread, so 99% of our tables were
> loaded in under 30 minutes, while 2 threads remained running for our
> large 200+ million row tables that contained LOBs. Large non-LOB heap
> tables loaded quickly.
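
As a rough illustration of that kind of run (directory object, dump file
names and the parallel degree are placeholders, not Ken's actual settings):

# Sketch of a parallel Data Pump export/import on 10g.
expdp system/passwd full=y directory=DATA_PUMP_DIR \
      dumpfile=full_%U.dmp logfile=expdp_full.log parallel=16

# (copy the dump files to the target server's directory before the impdp)
impdp system/passwd full=y directory=DATA_PUMP_DIR \
      dumpfile=full_%U.dmp logfile=impdp_full.log parallel=16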
>
>
>
> Indexes are built one at a time, each with the parallel setting's number
> of slaves. This may be to your advantage, as I suspect it performs
> better with fewer large indexes rather than many smaller ones. Package
> headers are loaded in parallel, which can cause deadlocks, at which
> point impdp dies but can be restarted. Package bodies are loaded
> serially, which was our major source of pain, as it took many hours to
> load and later compile 40,000 large package headers and bodies.
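
A common workaround (just a sketch, not what Ken describes doing; names and
paths are placeholders) is to skip indexes during the import, pull their
DDL out with SQLFILE, and run that DDL yourself with whatever degree of
parallelism you like:

# Import the data without indexes
impdp system/passwd directory=DATA_PUMP_DIR dumpfile=full_%U.dmp \
      exclude=index logfile=impdp_noindex.log

# Write the CREATE INDEX statements into a script (nothing is executed)
impdp system/passwd directory=DATA_PUMP_DIR dumpfile=full_%U.dmp \
      include=index sqlfile=create_indexes.sql

# Edit create_indexes.sql (add PARALLEL/NOLOGGING as desired) and run it
sqlplus system/passwd @create_indexes.sql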
>
>
>
> Make sure to set your streams size, as the default is 16m, which will
> blow out on your first large table and the job will die. Although it can
> be restarted, it will continue from the beginning of that data load and
> will load duplicate data.
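
Presumably that is the streams pool the Data Pump queues live in; a minimal
way to size it explicitly on the target instance (256m is just an example
value, and scope=both assumes an spfile is in use):

sqlplus -s "/ as sysdba" <<'EOF'
alter system set streams_pool_size = 256m scope=both;
EOF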
>
>
>
> I don't remember if table stats are run in parallel or serially, but I
> don't recommend letting impdp run them, as it may not use the method you
> typically use; they are better run post-migration.
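
A sketch of that split (schema name and degree are placeholders): leave the
statistics out of the import and gather them yourself afterwards with your
usual settings:

# Import without the exported statistics
impdp system/passwd directory=DATA_PUMP_DIR dumpfile=full_%U.dmp \
      exclude=statistics logfile=impdp_nostats.log

# Gather statistics post-migration
sqlplus -s "/ as sysdba" <<'EOF'
exec dbms_stats.gather_schema_stats(ownname => 'DWP', degree => 8, cascade => TRUE)
EOF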
>
>
>
> Advanced Queuing can be a real pain with impdp; if you use this feature
> (headache), make sure to test it very well.
>
>
>
> In short, if your application is self-contained, which is likely for a
> data warehouse, I'd prefer to use transportable tablespaces, as impdp
> has many pitfalls.
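
For the cross-endian part, a 10g target can convert the datafiles with RMAN
as they are transported; a rough sketch (tablespace name, paths and the
platform string are placeholders -- check V$TRANSPORTABLE_PLATFORM for the
exact name):

# 1) Source: make the tablespace read only and export its metadata
sqlplus -s "/ as sysdba" <<'EOF'
alter tablespace DW_DATA read only;
EOF
expdp system/passwd directory=DATA_PUMP_DIR dumpfile=tts_dw.dmp \
      transport_tablespaces=DW_DATA

# 2) Copy the datafiles and tts_dw.dmp to the AIX server, then convert the
#    endianness on the target with RMAN
rman target / <<'EOF'
CONVERT DATAFILE '/staging/dw_data01.dbf'
  FROM PLATFORM 'HP-UX (64-bit)'
  FORMAT '/oracle10/oradata/DW/dw_data01.dbf';
EOF

# 3) Plug the tablespace into the target database
impdp system/passwd directory=DATA_PUMP_DIR dumpfile=tts_dw.dmp \
      transport_datafiles='/oracle10/oradata/DW/dw_data01.dbf'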
>
>
>
> Ken
>
>
>
>
>
>   _____
>
> From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx]
> On Behalf Of Syed Jaffar Hussain
> Sent: Wednesday, September 19, 2007 6:44 AM
> To: oracle-l@xxxxxxxxxxxxx
> Subject: Migrating 3TB size database from HP to AIX, also 9i to 10g.
>
>
>
> Hello list,
>
>
>
> I have the uphill task of migrating our data warehouse Oracle 9i
> database, 3TB in size, from an HP Superdome to AIX, Oracle 10g.
>
> Since the migration is across platforms, exp/imp is the obvious option.
> But with a 3TB database, exp/imp may take just too much time.
>
>
>
> I have thought of the following:
>
>
>
> Upgrading the 9i database on AIX to 10g (in-place upgrade)
>
> Then, using the cross-platform database conversion method to go from HP
> to AIX.
>
>
>
> The problem with this method is the temporary space required to hold
> the converted data files; it would be difficult to find 3TB of
> temporary space.
>
>
>
> Do you see any other solution for this task?
>
>
>
> Regards,
>
>
>
>
>
> Jaffar
>
> --
> Best Regards,
> Syed Jaffar Hussain
> Oracle ACE
> 8i,9i & 10g OCP DBA
>
> http://jaffardba.blogspot.com/
> http://apex.oracle.com/pls/otn/f?p=19297:4:1579866181463918::NO:4:P4_ID:126
> ------------------------------------------------------------------------------
> "Winners don't do different things. They do things differently."
>
>

--
//www.freelists.org/webpage/oracle-l

