RE: Backup for large number of archived logs per hour

  • From: "Bobak, Mark" <Mark.Bobak@xxxxxxxxxxxx>
  • To: "mdinh@xxxxxxxxx" <mdinh@xxxxxxxxx>, "oracle-l@xxxxxxxxxxxxx" <oracle-l@xxxxxxxxxxxxx>
  • Date: Thu, 4 Mar 2010 15:38:19 -0500

This is really a DW?  And a 200GB DW is generating 20GB of archive logs per 
hour??

The first thing I would question is the loading strategy you're using.....are 
you making use of partitioning?  Partition exchange?  Direct loads?  Nologging?

I'm wondering if there aren't vast improvements that could be made in terms of 
hugely reducing the volume of redo generation in the first place.

Of course, I know nothing of your environment, or what constraints you're 
under, but, if it's a "typical" data warehouse, where you load chunks of data 
from a flat file, there ought to be huge optimizations that could be made in 
reducing redo log generation.
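For instance, the kind of load the techniques above describe might look something like this (a minimal sketch; the table and partition names are hypothetical, and it assumes the database is not running in FORCE LOGGING mode, which would override NOLOGGING):

```sql
-- Make the staging table NOLOGGING so direct-path operations
-- generate minimal redo.
ALTER TABLE sales_stage NOLOGGING;

-- Direct-path (APPEND) insert from an external table over the flat file;
-- bypasses the buffer cache and, with NOLOGGING, skips most redo.
INSERT /*+ APPEND */ INTO sales_stage
SELECT * FROM ext_sales_flat_file;
COMMIT;

-- Swap the loaded segment into the partitioned fact table;
-- this is a data-dictionary operation, so no rows are copied.
ALTER TABLE sales EXCHANGE PARTITION p_2010_03 WITH TABLE sales_stage;
```

The usual caveat applies: anything loaded NOLOGGING is not recoverable from the redo stream, so you'd want a backup (or the ability to reload from the source files) immediately after the load.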

Just my thoughts...worth exactly what you paid for them.... :)

-Mark

From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx] On 
Behalf Of Michael Dinh
Sent: Thursday, March 04, 2010 3:32 PM
To: oracle-l@xxxxxxxxxxxxx
Subject: Backup for large number of archived logs per hour

As an example, if we have a 200G DW generating 20G of archived logs per hour, 
what would be a more efficient way to back it up?

It does not make sense to back up archived logs when the entire DW is less than 
the amount of archived logs generated.

We are currently not using RMAN, but BCVs and snapshots are created at the array 
level.

Thanks, Michael.
