RE: Backup for large number of archived logs per hour

  • From: Don Granaman <DonGranaman@xxxxxxxxxxxxxxx>
  • To: "mdinh@xxxxxxxxx" <mdinh@xxxxxxxxx>, "oracle-l@xxxxxxxxxxxxx" <oracle-l@xxxxxxxxxxxxx>
  • Date: Thu, 4 Mar 2010 15:04:06 -0600

"Unfortunately, the consulting company does not have great Oracle knowledge =("

I nominate this for the highly-coveted Understatement of the Year Award.
________________________________
From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx] On 
Behalf Of Michael Dinh
Sent: Thursday, March 04, 2010 2:56 PM
To: oracle-l@xxxxxxxxxxxxx
Subject: RE: Backup for large number of archived logs per hour

Thanks, Mark, for the response.

Partitioning?  Partition exchange?  Nologging?

NO to all the above questions =(

We currently don't know what to partition, and logging is required at the moment 
because of a possible Data Guard implementation.

I believe direct load is executed only on the initial load. The ETL was developed 
by a consultant using BusinessObjects Data Integrator.

Unfortunately, the consulting company does not have great Oracle knowledge =(
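
A Data Guard standby normally means FORCE LOGGING at the database level, which 
silently overrides any NOLOGGING attribute. A minimal sketch of how to verify 
both settings (the table name SALES_FACT is made up for illustration):

  -- Is redo logging forced database-wide?  Data Guard setups usually are.
  SELECT force_logging FROM v$database;

  -- Logging attribute of one table (hypothetical name):
  SELECT table_name, logging
    FROM dba_tables
   WHERE table_name = 'SALES_FACT';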

________________________________
From: Bobak, Mark [mailto:Mark.Bobak@xxxxxxxxxxxx]
Sent: Thursday, March 04, 2010 12:38 PM
To: Michael Dinh; oracle-l@xxxxxxxxxxxxx
Subject: RE: Backup for large number of archived logs per hour

This is really a DW?  And a 200GB DW is generating 20GB of archive logs per 
hour??
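
One way to verify that rate directly from the database, as a sketch 
(v$archived_log records the size of every archived log):

  -- Archived log volume in GB per hour, most recent first:
  SELECT TRUNC(completion_time, 'HH24') AS hour,
         ROUND(SUM(blocks * block_size) / 1024 / 1024 / 1024, 2) AS gb
    FROM v$archived_log
   GROUP BY TRUNC(completion_time, 'HH24')
   ORDER BY hour DESC;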

The first thing I would question is the loading strategy you're using... Are 
you making use of partitioning?  Partition exchange?  Direct loads?  Nologging?

I'm wondering if there aren't vast improvements that could be made in reducing 
the volume of redo generation in the first place.

Of course, I know nothing of your environment or the constraints you're under, 
but if it's a "typical" data warehouse, where you load chunks of data from flat 
files, there ought to be huge optimizations available to reduce redo log 
generation.
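
For illustration only, a minimal sketch of that kind of load: a direct-path, 
NOLOGGING insert into a staging table, then a partition exchange into the fact 
table. Every object name here (SALES_STAGE, SALES_FACT, EXT_SALES_FLATFILE, 
P_20100304) is hypothetical:

  -- Stage one day's data with a direct-path (APPEND) insert;
  -- NOLOGGING on the staging table minimizes redo for the load.
  ALTER TABLE sales_stage NOLOGGING;

  INSERT /*+ APPEND */ INTO sales_stage
  SELECT * FROM ext_sales_flatfile;  -- external table over the flat file
  COMMIT;

  -- Swap the loaded segment into the partitioned fact table;
  -- this is a dictionary operation, generating almost no redo.
  ALTER TABLE sales_fact
    EXCHANGE PARTITION p_20100304 WITH TABLE sales_stage
    WITHOUT VALIDATION;

Two caveats: datafiles touched by a NOLOGGING load must be backed up before 
they are recoverable, and database-level FORCE LOGGING (e.g., for Data Guard) 
overrides NOLOGGING entirely.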

Just my thoughts... worth exactly what you paid for them. :)

-Mark

From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx] On 
Behalf Of Michael Dinh
Sent: Thursday, March 04, 2010 3:32 PM
To: oracle-l@xxxxxxxxxxxxx
Subject: Backup for large number of archived logs per hour

As an example, if we have a 200GB DW generating 20GB of archived logs per hour, 
what would be a more efficient way to back it up?

It does not make sense to back up the archived logs when the entire DW is 
smaller than the amount of archived logs generated.

We are currently not using RMAN; BCVs and snapshots are created at the array 
level.
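
For comparison, the usual RMAN approach to heavy archive volume is a frequent 
sweep that backs the logs up and deletes them in one pass; a minimal sketch, 
with a made-up destination path:

  # rman_arch_sweep.rcv -- hypothetical script name
  BACKUP ARCHIVELOG ALL
    FORMAT '/backup/arch_%d_%s_%p.bkp'
    DELETE INPUT;

Scheduled every 15-30 minutes, this keeps the archive destination from filling 
regardless of the generation rate, though reducing redo generation at the 
source is the better long-term fix.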

Thanks, Michael.
