RE: Export/Import data thru pipe from prod to test server

  • From: <Joel.Patterson@xxxxxxxxxxx>
  • To: <nigel_cl_thomas@xxxxxxxxx>, <DIANNA.GIBBS@xxxxxxxxxxxxx>, <oracle-l@xxxxxxxxxxxxx>
  • Date: Tue, 13 Feb 2007 08:24:12 -0500

Here are several example scripts.   I hope the forum takes
attachments, but you are also addressed directly.

Transfer.ksh, expQTDBpipe.ksh, and exp_pipes.ksh are self-explanatory
and stand alone.   Some explanation exists in pipes.txt.

However, the remaining four scripts work as a team, and can be run again
and again by observing the naming convention.  They export and import
data via pipes across two servers: source on one server, destination on
the other, pipes on both.

They were run on HP-UX, and you have to open up security a bit as
described (inetd.conf etc.).


So, start the import by executing its .sh, which kicks off the .ksh.
Then begin the export by executing its .sh, which calls the .ksh.

Of course you have to have created the pipes first on each machine.  The
rest is convention.   Even if you do not use this or are unable to
put it together, the scripts should provide good examples.   Once you get
the hang of it, I am sure the picture will clear up.
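
For orientation, here is a minimal sketch of what the destination-side
import script might look like; the DB name, pipe and log paths, and the
password handling are assumptions that mirror the export script below:

#!/bin/ksh
# destination-side sketch -- names are assumed, not from the real scripts
DB=OTG
USERID=SYSTEM
import_file=/home/joelp/pipe/i${DB}pipe        # the pipe rsh/dd appends to
log_file=/home/joelp/pipe/log/imp${DB}full.log
passwd=change_me                               # placeholder credential

# imp blocks reading from the pipe until the export side starts sending data
nohup imp ${USERID}/${passwd} file=${import_file} log=${log_file} full=y \
    ignore=y &

Start this side first; the import simply waits on the empty pipe until the
export begins writing.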


#!/bin/ksh 
############################################################################
#  1.  On destination server edit /etc/inetd.conf and uncomment 'shell' line.
#  2.  Place servername and username in $HOME/.rhosts file.
#  3.  Restart inetd if changed.  'inetd -c'.
#  4.  Change DB and/or servername; make sure you have correct username and
#      export parameters.
#  5.  root 'mknod <pipename> p'  then chown
############################################################################
# initialize variables
############################################################################
DB=OTG 
DESTINATION_SERVERNAME=stpprod2
USERID=SYSTEM
DESTINATION_FILE=/home/joelp/pipe/i${DB}pipe
export_file=/u01/app/oracle/joelp/pipe/e${DB}pipe
log_file=/u01/app/oracle/joelp/pipe/log/exp${DB}full.log

. /usr/local/bin/oraenvnew ${DB}
today=`date '+%y/%m/%d %H:%M:%S'` 

. $HOME/scripts/getpassword ${DB} ${USERID}
passwd=${password}

echo "KEYWORDGREP ${DB} Began at:  ${today}" 
echo "export_file = ${export_file}"
echo "log_file = ${log_file}"

# run the export in the background, writing the dump to the named pipe
nohup exp ${USERID}/${passwd} file=${export_file} log=${log_file} full=y \
    consistent=yes compress=yes direct=y &

# read the local pipe and append it, via remote shell, to the pipe on the
# destination server where the import is reading
cat ${export_file} | rsh ${DESTINATION_SERVERNAME} dd bs=4000b ">>" ${DESTINATION_FILE}

today=`date '+%y/%m/%d %H:%M:%S'` 
echo "KEYWORDGREP ${DB} Ended at:  ${today}
**************************************************************

Joel Patterson
Database Administrator
joel.patterson@xxxxxxxxxxx
x72546
904  727-2546

-----Original Message-----
From: oracle-l-bounce@xxxxxxxxxxxxx
[mailto:oracle-l-bounce@xxxxxxxxxxxxx] On Behalf Of Nigel Thomas
Sent: Tuesday, February 13, 2007 3:33 AM
To: DIANNA.GIBBS@xxxxxxxxxxxxx; oracle-l
Subject: Re: Export/Import data thru pipe from prod to test server

Dianna
You wrote:

"We're AIX 5.2 Oracle 9206 and need to export data from prod to our test
instance for refresh.I thought I had read where I could use pipes and
export/move data to
test box/import all at the same time thus eliminating disk space for the
export file.
Any thoughts for doing this?  Thanks."

Yes, this is a well-known technique.

Before you start you need to clear down the target schema(s) - i.e. drop
all the objects in them (if you want a complete refresh), or clear all
the data (if it is a data-only transfer, i.e. the structure is
unchanged; don't forget you'll probably need to disable constraints
too).
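
For the constraint step, one way to script it from the shell -- the logon
and the choice to disable only foreign keys are assumptions:

#!/bin/ksh
# generate and run ALTER TABLE ... DISABLE CONSTRAINT for every foreign
# key owned by the target schema (logon is a placeholder)
sqlplus -s target/password <<'EOF'
set pagesize 0 feedback off heading off
spool disable_fks.sql
select 'alter table ' || table_name ||
       ' disable constraint ' || constraint_name || ';'
  from user_constraints
 where constraint_type = 'R';   -- 'R' = referential (foreign key)
spool off
@disable_fks.sql
exit
EOF

Re-enable the same constraints once the import has finished.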

Then the basic sequence is, as best I remember:
#############
# create a fifo  

 rm -f mypipe       # just to make sure there's not a real file there

 mknod mypipe p     # the p identifies it is a pipe you want to make

# start a background process importing from the fifo

 imp target/password file=mypipe fromuser=<source> touser=<target> <other parameters> &

# start a background process exporting to the fifo

exp source/password file=mypipe <other parameters>  &

# wait for them to finish

wait

##############

1) If this is going to take a while, consider using nohup for both the
background commands (then they won't get terminated if you log off).
2) You'll see the logging from both processes intermingled - you can
redirect the stdout to separate imp/exp log files as required (see the
sketch below).
3) Of course you can parallelise this by exporting table subsets in
multiple (exp|imp) pairs - but don't forget that table-mode export doesn't
include code etc...
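
Putting 1) and 2) together, a sketch - the schema names and parameters are
placeholders:

#!/bin/ksh
rm -f mypipe                 # make sure no ordinary file is in the way
mknod mypipe p               # create the fifo

# start the import first so it blocks waiting for data on the fifo;
# nohup keeps both jobs alive if you log off, and each gets its own log
nohup imp target/password file=mypipe fromuser=scott touser=scott_test \
    > imp_refresh.log 2>&1 &
nohup exp source/password file=mypipe owner=scott \
    > exp_refresh.log 2>&1 &

wait                         # block until both background jobs finish
echo "export/import finished"

Here both processes run on the same machine; for the two-server variant
see Joel's scripts above.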

HTH

Regards Nigel




--
//www.freelists.org/webpage/oracle-l


The attached scripts are meant to pretty much automate what I was doing --
exporting from a UNIX database and simultaneously importing to a UNIX
database.  You will have to edit them to remove functions you don't use, like
getpassword and oraenvnew.

Once the pipes are made, note the naming convention (the DB name is part of
each pipe name); one really only has to change DB at the top.  Some
explanation is at the top.  Remember your permissions for remote shell, ksh,
and sh.   Note that destination_dir used by the rsh command is on the
destination server, so... theoretically you are exporting to a pipe as your
dmp file, you cat/rsh that pipe to one on the destination, and you import on
the destination.  I believe you need to start the import first; it will wait
until it gets some data from the export.  If you go the other way around it
either doesn't work or your pipe fills up, causing you to run out of space
anyway.

I think these scripts work together, but they are kept for reference only.
Make your pipes first.  If you have questions let me know.
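
As a one-time setup, the pipe creation might look like this (paths taken
from the script above; the owner and group are assumptions):

# on the source server, as the oracle software owner:
mknod /u01/app/oracle/joelp/pipe/eOTGpipe p

# on the destination server, as root, then hand it to the oracle owner:
mknod /home/joelp/pipe/iOTGpipe p
chown oracle:dba /home/joelp/pipe/iOTGpipe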

You can always test most of these lines by hand first, and of course
hard-code some values.  Not sure how you can use this concept with NT.
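
For instance, a quick by-hand test of the remote-shell leg (server and pipe
names as in the script above):

# on the destination server, start a reader first; it blocks on the pipe:
cat /home/joelp/pipe/iOTGpipe

# then, from the source server, push a test string through the same path
# the export uses -- the quoted ">>" is expanded by the remote shell:
echo "hello" | rsh stpprod2 dd ">>" /home/joelp/pipe/iOTGpipe

If the reader prints "hello", the .rhosts entries, inetd change, and pipe
permissions are all in order.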
