Re: Refreshing a multi TB database

  • From: "Alexander Fatkulin" <afatkulin@xxxxxxxxx>
  • To: veeeraman@xxxxxxxxx
  • Date: Mon, 19 Mar 2007 15:47:39 +1000

Ram,

you can also consider the following scenario if you're on 10g. There
are some restrictions involved, but I think it's worth considering.

1. create a physical standby from the production box. We will use it as
the future test DB.
2. enable flashback on this newly created standby.
3. activate it and open it in read-write mode.

Now you have a fully workable clone of the production DB.
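For 10g, the three steps above might look roughly like this from
SQL*Plus on the standby (a sketch only - the restore point name is my
own, and exact steps vary by environment; the guaranteed restore point
needs 10.2):

    -- stop managed recovery once the standby has caught up
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

    -- enable flashback while still mounted as a standby
    ALTER DATABASE FLASHBACK ON;

    -- optional: mark a clean point to rewind to at refresh time
    CREATE RESTORE POINT before_activation GUARANTEE FLASHBACK DATABASE;

    -- break it off from prod and open read-write as the test DB
    ALTER DATABASE ACTIVATE STANDBY DATABASE;
    ALTER DATABASE OPEN;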

At the end of the day when you need a fresh copy:

1. close the test DB.
2. mount it and flash it back to before the activation point.
3. convert it back to a physical standby and roll forward using archivelogs from prod.
4. activate it and open it read-write again.
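The refresh cycle above could be sketched like this (again assuming the
before_activation restore point from the first setup; treat it as an
outline, not a tested script):

    SHUTDOWN IMMEDIATE
    STARTUP MOUNT

    -- rewind past everything the test DB did since activation
    FLASHBACK DATABASE TO RESTORE POINT before_activation;

    -- turn it back into a physical standby of prod
    ALTER DATABASE CONVERT TO PHYSICAL STANDBY;

    -- restart to mount, then apply the archivelogs shipped from prod
    SHUTDOWN IMMEDIATE
    STARTUP MOUNT
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;

    -- once caught up: cancel recovery, activate, open read-write again
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    ALTER DATABASE ACTIVATE STANDBY DATABASE;
    ALTER DATABASE OPEN;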

This can (or cannot!) be significantly faster than doing a full DB
restore, depending on the situation. Say your prod DB doesn't generate a
huge amount of redo between refresh points, and the test DB doesn't
generate too many flashback logs (so rolling the whole thing back and
then forward takes less time than a full restore+recover). You might
also consider running prod in force logging mode - again, this can
affect your choice.
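Force logging on the primary is a one-liner; without it, NOLOGGING
operations on prod would leave unrecoverable blocks in the standby copy:

    -- on the production (primary) database
    ALTER DATABASE FORCE LOGGING;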

HTH.

On 3/17/07, Ram Raman <veeeraman@xxxxxxxxx> wrote:
Hi all,

We have a huge production database that is currently 2TB and expected to
grow to 3TB soon. We have the corresponding test database that is only a
third of its size and was refreshed a while ago. I would like to have
refreshed data so we can test things more accurately in the test DB. I am
pressing for more space and better refresh methods, but I want to come up
with some suggestions. Can you please share your opinions on this? Thanks.

Some things I can think of: we have to consider the network bandwidth
between the two servers, the space, and the activity in the test DB. Is
there anything else?

Ram.



--
Alexander Fatkulin
--
//www.freelists.org/webpage/oracle-l

