Re: Export (exp) or Datapump (expdp)

  • From: rjamya <rjamya@xxxxxxxxx>
  • To: oratips@xxxxxxxxx
  • Date: Mon, 5 Aug 2013 08:46:25 -0400

I think 'most optimal' is completely up to you and your environment. If
your databases are far apart, you have limited bandwidth, or you work at a
company like mine where no other db/host can ever talk to a production
host, then using NETWORK_LINK is not useful and may even hurt you (in the
limited-bandwidth case). In that situation you might want to expdp to
local dump files, move them across after completion, and then impdp them.
Just because your dictionary says the table is 28gb doesn't mean the
export will be 28gb; furthermore, if you have a license for compression
(the Advanced Compression option), the dump files will be smaller. Index
creation and constraint validation are costs you will pay no matter which
method you choose (with or without NETWORK_LINK).
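A minimal sketch of that local-dump workflow (the directory object,
credentials, connect strings, and schema name below are all hypothetical --
adjust for your environment):

```shell
# On the source host: export to a local dump file.
expdp system@prod SCHEMAS=scott DIRECTORY=dump_dir \
    DUMPFILE=scott.dmp LOGFILE=exp_scott.log

# Move the file across (scp, sftp, whatever your security policy allows),
# then run the import on the target host:
impdp system@dev SCHEMAS=scott DIRECTORY=dump_dir \
    DUMPFILE=scott.dmp LOGFILE=imp_scott.log
```

DIRECTORY here names an Oracle directory object that must already exist
(and be readable/writable) on each database, not an OS path.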
If you are exporting to local dump files, use the PARALLEL option (and use
multiple dump files via the %U wildcard, otherwise parallelism is useless).
Even then there is no guarantee that Oracle will actually use the requested
degree of parallelism. If you are on a cluster, try specifying CLUSTER=N to
keep all the Data Pump worker processes on the same node. Oh, and if that
table of yours has a CLOB or BLOB, good luck: Data Pump has many
limitations around LOBs, and parallel export of a single LOB-heavy table
often doesn't help much.
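For instance (again with hypothetical object and connect names), a
parallel export with CLUSTER=N might look like:

```shell
# %U expands to 01, 02, ... so each worker writes its own file;
# with a single DUMPFILE the extra workers would just sit idle.
expdp system@prod TABLES=scott.big_table DIRECTORY=dump_dir \
    DUMPFILE=big_%U.dmp PARALLEL=4 CLUSTER=N LOGFILE=exp_big.log
```

Matching the number of %U files you allow to the PARALLEL degree is the
usual rule of thumb.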

Good luck.

Raj


On Fri, Aug 2, 2013 at 3:37 PM, Bala <oratips@xxxxxxxxx> wrote:

> I have a requirement to copy an 80 million row table from PROD to DEV.
> This table is 28 GB in size and has 4 indexes built on its columns.
>
> What would be the most optimal way to do it -- use traditional export or
> Data Pump (expdp)? Any insight will be greatly appreciated.
>


--
//www.freelists.org/webpage/oracle-l
