Re: Running import datapump over a database link.

  • From: "Yong Huang" <dmarc-noreply@xxxxxxxxxxxxx> (Redacted sender "yong321@xxxxxxxxx" for DMARC)
  • To: oracle-l@xxxxxxxxxxxxx
  • Date: Fri, 8 May 2015 08:28:19 -0700

Jeremy Schneider wrote:

if I remember correctly, nothing is compressed when you data pump over a
dblink.

Maybe you're referring to this?

IMPDP using network_link ignores table compression in 10gR2 (Doc ID 975822.1)

The problem does not exist in 11g.
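
For anyone following along, a minimal sketch of what an import over a database link looks like (the link name, schema, and directory object below are placeholders; with NETWORK_LINK no dump file is written, so the directory object is only used for the logfile):

# pull the SCOTT schema straight over the database link src_link
impdp system NETWORK_LINK=src_link SCHEMAS=scott DIRECTORY=data_pump_dir LOGFILE=impdp_netlink.log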

Speaking of transferring a dump file over, I have had some success using NFS to
avoid scp'ing the file across nodes. Just make sure the NFS read and write
buffers are set larger, e.g.

mount -t nfs -o hard,intr,rsize=32768,wsize=32768 nfssvr:/path /path
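In case it helps, here's a rough sketch of the whole flow with the source node acting as the NFS server; the host name, paths, and directory object names are made up for illustration:

# on the source node (NFS server): run the export into the exported path
# (assumes /dpump is already listed in /etc/exports on srcnode)
expdp system DIRECTORY=dpump_dir DUMPFILE=full.dmp FULL=y

# on the target node: mount the share with the larger buffers
mount -t nfs -o hard,intr,rsize=32768,wsize=32768 srcnode:/dpump /dpump

# in the target database: point a directory object at the NFS mount
echo "create directory dpump_nfs as '/dpump';" | sqlplus -s "/ as sysdba"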

You can use another box as the NFS server, or use one of the two nodes involved
in data pump as the server. If the source and target data pump nodes are about
equal in performance, it's probably better to choose the source as the NFS
server, because during impdp, the bottleneck may be on the target side instead
of the network. But test both to find out. If you can't write the logfile to the
NFS mount point, just write it somewhere else, e.g.
logfile=somelocaldir:impdp_from_nfs.log, or don't log at all (nologfile=yes).
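
So the import on the target could look something like this (directory object names are placeholders; dpump_nfs points at the NFS mount, local_dir at a local filesystem so the logfile stays off NFS):

# read the dump file from NFS, keep the logfile on local storage
impdp system DIRECTORY=dpump_nfs DUMPFILE=full.dmp FULL=y LOGFILE=local_dir:impdp_from_nfs.log

# or skip the logfile entirely
impdp system DIRECTORY=dpump_nfs DUMPFILE=full.dmp FULL=y NOLOGFILE=yes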

Yong Huang
