First, regarding what is apparently your actual goal (rather than your question about technical tactics for a single strategy):

1) Have you tested the time to clone (or split off a plex of) an image of the storage underlying your disk groups to some form of portable storage?

2) What is the distance between the data centers, and is there a decent road between them? Depending on the answers to these questions, it might well be better to use trucknet. (Look up quotes from Tim Gorman and me and others to the effect: Don't underestimate the bandwidth of a box of tapes on a bus cruising down the interstate.) See also: sneakernet (Charlie Muntz - we used to call athletic shoes sneakers, and running a 3.5 inch firmie down the hall was faster than 44Kbaud), busnet, taxinet, and jetnet.

3) Does the bulk of your table data size (which includes index organized tables, but none of the other indexes or the space allocated for system objects, undo, redo, and the like) live in tables that conform to sqlplus copy? Can the tables not conforming to sqlplus copy capabilities reasonably be datapumped out, transmitted, and datapumped in? You appear to have lots of horsepower, so perhaps you can peg your available network with pull traffic via sqlplus copy of the tables (perhaps largest first), and simply create the indexes on the remote side. This is worth timing on a sample test. You will probably be surprised what happens if you use proper arraysize and copycommit settings. Pull streaming, assigning remote nodes with affinity by disk group, may minimize the RAC tax. (Remember to set long to at least as big as the longest long in a given table being moved, or it will silently truncate to whatever long is set to.)

If your network bandwidth is sufficient that you can drive performance issues on the underlying disk media, we should speak offline about configuring the underlying storage so that you have independently operating pools of i/o horsepower.
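For what it's worth, the pull form of a sqlplus copy run on a destination node looks roughly like the sketch below. The connect string, table name, and the arraysize/copycommit values are placeholders, not recommendations - time a sample test as suggested above.

```sql
-- Run in SQL*Plus on the destination side (pull traffic).
-- COPY only handles CHAR, DATE, LONG, NUMBER and VARCHAR2 columns,
-- which is what "tables that conform to sqlplus copy" means above.

SET LONG 2000000000   -- at least the longest LONG, or it silently truncates
SET ARRAYSIZE 5000    -- rows fetched per round trip (placeholder value)
SET COPYCOMMIT 10     -- commit every 10 batches (placeholder value)

-- Pull BIG_TABLE from the source and create it locally; continuation
-- lines of the COPY command end with a hyphen.
COPY FROM app_user/app_pw@source_db -
CREATE big_table -
USING SELECT * FROM big_table
```

Running one such session per large table, spread across the destination nodes (largest tables first), is one way to keep the network pegged while the indexes are rebuilt remotely.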
Second, why are you not copying the disk images underlying the disk groups and then just mounting the disk group, rather than trying to copy out the bits from one ASM and then shovel them into another?

"All the real ASM metadata is inside the diskgroup just like any other file system. A diskgroup is self describing. If an ASM instance can see all the disks in a diskgroup then it can mount it. A diskgroup can be moved from one cluster to another just by changing the storage to be visible to the new cluster. Then it can be mounted on the new cluster. For safety it is best to not be visible to more than one cluster."

I'll let you guess the author. And that last bit about more than one cluster - he meant simultaneously. Since you're making a copy, that won't be a problem. In fact, the only problem I can think of is if you named your diskgroups exactly the same thing.

Regards,
mwf

_____

From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx] On Behalf Of dd yakkali
Sent: Thursday, January 20, 2011 3:14 AM
To: oracle-l@xxxxxxxxxxxxx
Subject: very large RAC database copy to another RAC

Dear DB admin gurus,

I need some thoughts on this. We have a very large NOARCHIVELOG database, 11.1.0.7 RAC, 20 nodes, RHEL4, 300+T spread over about 50 tablespaces. We want to copy this to another RAC cluster, RHEL5, 30 nodes, in a different data center connected with pretty good network infrastructure between them. They both use ASM of course. We are probably allowed to be down on the source for about 48 hours max. So our aim is to parallelize (increase throughput by utilizing all the nodes of both RAC systems) the copy from one ASM to another ASM directly.

I want to know what techniques are available to do this. A few techniques that came up in the discussion are RMAN duplicate database, dbms_file_transfer via dblink to copy datafiles using the transportable tablespaces technique, etc.
I want to know if any of you out there have done dbms_file_transfer between databases, and if you have, can you please share your experiences. Any thoughts/pointers are greatly appreciated.

thanks,
Deen
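Since the question asks specifically about dbms_file_transfer: a pull over a database link, run on the destination instance, looks roughly like the sketch below. The link, directory objects, and file name are placeholders; the tablespaces would need to be read-only (transportable tablespace style) while the files are copied, and dbms_file_transfer requires file sizes that are multiples of 512 bytes.

```sql
-- On the destination database, pulling a datafile from the source
-- over a database link. SRC_LINK, SRC_DG, DEST_DG and the file name
-- are all placeholders for this sketch.

CREATE DATABASE LINK src_link
  CONNECT TO admin_user IDENTIFIED BY admin_pw USING 'SOURCE_TNS';

-- Directory objects can point at ASM locations on both sides, e.g.:
-- CREATE DIRECTORY dest_dg AS '+DATA/newdb/datafile';

BEGIN
  DBMS_FILE_TRANSFER.GET_FILE(
    source_directory_object      => 'SRC_DG',    -- defined on the source DB
    source_file_name             => 'users01.dbf',
    source_database              => 'SRC_LINK',
    destination_directory_object => 'DEST_DG',
    destination_file_name        => 'users01.dbf');
END;
/
```

Each GET_FILE call moves one file, so the parallelism you are after would come from running many such calls concurrently, spread across the destination nodes, one per datafile.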