This is a script I used on 11g Standard Edition that successfully cloned the database. A bug did appear along the way, but I am leaving the script as it is; I hope it is useful to someone.

SQLPLUS /NOLOG
CONNECT SYS@ASUD_1.WORLD AS SYSDBA;
ALTER SYSTEM CHECKPOINT;
ALTER SYSTEM ARCHIVE LOG CURRENT;
EXIT

-- Create the directories
-- Create the clone database service, listener, and tnsnames entries
-- Copy the init file, rename it, and add these parameters to the new init:
log_file_name_convert=('E:\ORAASUD\logfiles','E:\ORACLONEBD\logfiles','d:\ORAASUD\logfiles','d:\ORACLONEBD\logfiles')
db_file_name_convert=('E:\ORAASUD\datafiles','E:\ORACLONEBD\datafiles','d:\ORAASUD\datafiles','d:\ORACLONEBD\datafiles','E:\ORAASUDREADONLY','E:\ORACLONEBDREADONLY','D:\ORAASUDREADONLY','D:\ORACLONEBDREADONLY')

SQLPLUS /NOLOG
CONNECT SYS@CLONEBD_1.WORLD AS SYSDBA;
SHUTDOWN IMMEDIATE;
STARTUP NOMOUNT PFILE=E:\ORACLONEBD\INIT\INITCLONEBD.ORA;
EXIT

RMAN NOCATALOG
connect target SYS/ASUDPASSWORD@ASUD_1
crosscheck backup;
connect auxiliary SYS/CLONEBDPASSWORD@CLONEBD_1
-- duplicate target database to CLONEBD until time '08/03/2010:23:00:00'
duplicate target database to CLONEBD;
EXIT

SQLPLUS /NOLOG
CONNECT SYS/CLONEBDPASSWORD@CLONEBD_1.WORLD AS SYSDBA;
-- Recreate the SPFILE
SHUTDOWN IMMEDIATE
EXIT

-- Rename the datafiles at the OS level while the database is down:
RENAME E:\ORACLONEBD\DATAFILES\DFL_SYS_ASUD DFL_SYS_CLONEBD
RENAME D:\ORACLONEBD\DATAFILES\DFL_BLOB_ASUD DFL_BLOB_CLONEBD
RENAME E:\ORACLONEBD\DATAFILES\DFL_USER_ASUD DFL_USER_CLONEBD
RENAME D:\ORACLONEBD\DATAFILES\DFL_INDX_ASUD DFL_INDX_CLONEBD
RENAME E:\ORACLONEBD\DATAFILES\DFL_ASUD DFL_CLONEBD
RENAME E:\ORACLONEBD\DATAFILES\DFL_LOB_ASUD DFL_LOB_CLONEBD
RENAME D:\ORACLONEBD\DATAFILES\DFL_BLOB_SEC_ASUD DFL_BLOB_SEC_CLONEBD
RENAME E:\ORACLONEBDREADONLY\DFL_READONLY_ASUD DFL_READONLY_CLONEBD
RENAME D:\ORACLONEBDREADONLY\DFL_READONLY_IDX_ASUD DFL_READONLY_IDX_CLONEBD
RENAME D:\ORACLONEBD\DATAFILES\DFL_DRSYS_ASUD DFL_DRSYS_CLONEBD
RENAME D:\ORACLONEBD\DATAFILES\DFL_XDB_ASUD DFL_XDB_CLONEBD
RENAME E:\ORACLONEBD\DATAFILES\SYSAUX_ASUD SYSAUX_CLONEBD

SQLPLUS /NOLOG
CONNECT SYS/CLONEBDPASSWORD@CLONEBD_1.WORLD AS SYSDBA;
STARTUP MOUNT
ALTER DATABASE RENAME FILE 'E:\ORACLONEBD\DATAFILES\DFL_SYS_ASUD' TO 'E:\ORACLONEBD\DATAFILES\DFL_SYS_CLONEBD';
ALTER DATABASE RENAME FILE 'D:\ORACLONEBD\DATAFILES\DFL_BLOB_ASUD' TO 'D:\ORACLONEBD\DATAFILES\DFL_BLOB_CLONEBD';
ALTER DATABASE RENAME FILE 'E:\ORACLONEBD\DATAFILES\DFL_USER_ASUD' TO 'E:\ORACLONEBD\DATAFILES\DFL_USER_CLONEBD';
ALTER DATABASE RENAME FILE 'D:\ORACLONEBD\DATAFILES\DFL_INDX_ASUD' TO 'D:\ORACLONEBD\DATAFILES\DFL_INDX_CLONEBD';
ALTER DATABASE RENAME FILE 'E:\ORACLONEBD\DATAFILES\DFL_ASUD' TO 'E:\ORACLONEBD\DATAFILES\DFL_CLONEBD';
ALTER DATABASE RENAME FILE 'E:\ORACLONEBD\DATAFILES\DFL_LOB_ASUD' TO 'E:\ORACLONEBD\DATAFILES\DFL_LOB_CLONEBD';
ALTER DATABASE RENAME FILE 'D:\ORACLONEBD\DATAFILES\DFL_BLOB_SEC_ASUD' TO 'D:\ORACLONEBD\DATAFILES\DFL_BLOB_SEC_CLONEBD';
ALTER DATABASE RENAME FILE 'E:\ORACLONEBDREADONLY\DFL_READONLY_ASUD' TO 'E:\ORACLONEBDREADONLY\DFL_READONLY_CLONEBD';
ALTER DATABASE RENAME FILE 'D:\ORACLONEBDREADONLY\DFL_READONLY_IDX_ASUD' TO 'D:\ORACLONEBDREADONLY\DFL_READONLY_IDX_CLONEBD';
ALTER DATABASE RENAME FILE 'D:\ORACLONEBD\DATAFILES\DFL_DRSYS_ASUD' TO 'D:\ORACLONEBD\DATAFILES\DFL_DRSYS_CLONEBD';
ALTER DATABASE RENAME FILE 'D:\ORACLONEBD\DATAFILES\DFL_XDB_ASUD' TO 'D:\ORACLONEBD\DATAFILES\DFL_XDB_CLONEBD';
ALTER DATABASE RENAME FILE 'E:\ORACLONEBD\DATAFILES\SYSAUX_ASUD' TO 'E:\ORACLONEBD\DATAFILES\SYSAUX_CLONEBD';
ALTER DATABASE NOARCHIVELOG;
ALTER DATABASE OPEN;
-- Verify that no datafiles have problems
SELECT STATUS||'-'||ENABLED||'-'||NAME FROM V$DATAFILE WHERE NOT STATUS IN ('ONLINE','SYSTEM');
EXIT

2014-07-23 18:53 GMT-04:00 David Barbour <david.barbour1@xxxxxxxxx>:
> Oracle 11gR2
> RHEL 6.3
>
> I've been refreshing a 14TB SAP 'sandbox' instance on our test RAC for
> close to a year now using RMAN "DUPLICATE TARGET DATABASE TO <SID> FROM
> ACTIVE DATABASE". Now the organization wants to refresh a series of SAP
> instances in the test environment at the same time so they'll all be in
> sync.
> The methodologies employed to do this are pretty arcane. However,
> as part of this, we need to copy the 14TB Production instance back to
> test. The method for ensuring synchronization has been - and continues to
> be, for the most part - to shut down all the active production instances and
> make either a datafile backup or a clone snap. So I know that they're
> going to shut down the production constellation at a certain time.
> Normally, our test environment goes down on Friday nights for backups, so I
> have a window to perform an active duplicate from the running production
> instance back to test.
>
> What I do know is that it takes about 10 hours to do the refresh. So if I
> started on Friday night, it would finish in the wee hours (anything before
> 9AM is wee hours to me) of Saturday morning and would be out of sync with
> the other databases that will be shut down at 10PM on Saturday night. I
> could start it Saturday afternoon so it would finish in the 'dead zone',
> but I was wondering if anybody has tried to perform a point-in-time
> (future) recovery of an active duplicate.
>
> I know a recovery and a duplicate are fundamentally different, but in the
> duplicate from active database it applies the logs and redo to bring the
> copy current with the source. I have looked through the documentation and
> can't find any mention of using a recover clause with duplicate. Ideally
> I'd like to put a recover until cancel in there and apply logs until I
> reach the dead zone, then let it finish up.
>
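For what it's worth: in 11g the UNTIL clause of DUPLICATE is only supported for backup-based duplication, not for DUPLICATE ... FROM ACTIVE DATABASE, so a point-in-time clone has to be built from backups plus archived logs rather than from the live instance. A minimal sketch of such a run block follows; the connect strings, the TESTSID name, and the timestamp are placeholders I made up for illustration, not values from this thread:

```
RMAN> connect target sys@PROD        -- source database (placeholder alias)
RMAN> connect auxiliary sys@TESTSID  -- clone instance, started NOMOUNT (placeholder)
RMAN> run {
  -- Stop applying redo at the chosen cutover point (the "dead zone"):
  set until time "to_date('2014-07-26 22:00:00','yyyy-mm-dd hh24:mi:ss')";
  -- Backup-based duplication: note there is no FROM ACTIVE DATABASE here
  duplicate target database to TESTSID;
}
```

The same cutoff can alternatively be written inline as DUPLICATE TARGET DATABASE TO TESTSID UNTIL TIME '...'; either way, RMAN restores from backups and recovers the auxiliary only up to the specified point, which is what a "future" sync point would require you to schedule around.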