Re: Backup for large number of archived logs per hour

  • From: Andrew Kerber <andrew.kerber@xxxxxxxxx>
  • To: Mark.Bobak@xxxxxxxxxxxx
  • Date: Thu, 4 Mar 2010 14:46:33 -0600

I second that; usually you load a warehouse once a day, or once a week, not
continually.  Many places don't run their data warehouse in archivelog mode
because they can always rebuild it from existing data.  Is that a
possibility in this case?
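
For reference, flipping a database out of archivelog mode is quick, though
it needs a clean shutdown and gives up point-in-time recovery between
backups (a rough SQL*Plus sketch only):

  -- check the current mode
  SELECT log_mode FROM v$database;

  -- switching modes requires the database mounted but not open
  SHUTDOWN IMMEDIATE;
  STARTUP MOUNT;
  ALTER DATABASE NOARCHIVELOG;
  ALTER DATABASE OPEN;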

On Thu, Mar 4, 2010 at 2:38 PM, Bobak, Mark <Mark.Bobak@xxxxxxxxxxxx> wrote:

>  This is really a DW?  And a 200GB DW is generating 20GB of archive logs
> per hour??
>
>
>
> The first thing I would question is the loading strategy you’re using... are
> you making use of partitioning? Partition exchange? Direct loads?
> Nologging?
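>
> For example, a minimal-redo direct-path load might look roughly like the
> sketch below (table names here are made up for illustration; and since
> nologging changes can't be recovered from the redo stream, you'd want a
> fresh backup right after the load):
>
>   ALTER TABLE sales_fact NOLOGGING;
>
>   -- the APPEND hint requests a direct-path insert; combined with
>   -- NOLOGGING on the table, the data blocks generate minimal redo
>   -- (index maintenance is still logged)
>   INSERT /*+ APPEND */ INTO sales_fact
>   SELECT * FROM sales_stage;
>
>   COMMIT;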
>
>
>
> I’m wondering if there aren’t vast improvements to be made in reducing the
> volume of redo generated in the first place.
>
>
>
> Of course, I know nothing of your environment or what constraints you’re
> under, but if it’s a “typical” data warehouse, where you load chunks of
> data from flat files, there ought to be huge optimizations available along
> those lines.
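>
> The classic pattern is partition exchange: direct-path load the day's data
> into a standalone staging table, build its indexes nologging, then swap it
> into the partitioned fact table. The exchange itself is a data-dictionary
> operation and generates almost no redo. A sketch, with hypothetical names:
>
>   ALTER TABLE sales_fact
>     EXCHANGE PARTITION p_20100304 WITH TABLE sales_stage
>     INCLUDING INDEXES WITHOUT VALIDATION;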
>
>
>
> Just my thoughts... worth exactly what you paid for them... :)
>
>
>
> -Mark
>
>
>
> From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx]
> On Behalf Of Michael Dinh
> Sent: Thursday, March 04, 2010 3:32 PM
> To: oracle-l@xxxxxxxxxxxxx
> Subject: Backup for large number of archived logs per hour
>
>
>
> As an example, if we have a 200G DW generating 20G of archived logs per
> hour, what would be a more efficient way to back it up?
>
>
>
> It does not make sense to back up archived logs when the entire DW is
> smaller than the amount of archived logs generated.
>
>
>
> We are currently not using RMAN; BCVs and snapshots are created at the
> array level.
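>
> (For comparison, if we did move to RMAN, I understand the usual way to
> sweep logs off disk is something like the sketch below; whether it could
> keep pace with 20G per hour is part of my question:)
>
>   RMAN> BACKUP ARCHIVELOG ALL DELETE INPUT;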
>
>
>
> Thanks, Michael.
>



-- 
Andrew W. Kerber

'If at first you don't succeed, don't take up skydiving.'
