Re: Backup for large number of archived logs per hour

  • From: Niall Litchfield <niall.litchfield@xxxxxxxxx>
  • To: jkstill@xxxxxxxxx
  • Date: Sun, 7 Mar 2010 08:39:20 +0000

On Sun, Mar 7, 2010 at 2:46 AM, Jared Still <jkstill@xxxxxxxxx> wrote:

>  On Sat, Mar 6, 2010 at 1:25 PM, Martin Bach <
> development@xxxxxxxxxxxxxxxxx> wrote:
>
>> Hi Michael!
>>
>> I suggest you take on the advice given by the other replies to your
>> thread - 200G database generating 20G/day seems a lot...
>>
>>
> I'm not too sure that hard and fast rules, or even rules of
> thumb can really be trusted for such things.
>
> The size of the database can be a fair predictor of how much
> redo may be generated, but the volatility of the data must
> also be considered, as well as the number of processes
> making changes.
>
>

I think that is what rang the alarm bells to begin with. The 20G was per
hour and not per day, by the way. The database was described as a data
warehouse. Now I'm sure there are data warehouses that generate redo
equivalent to 10% of the database size every hour, but they really ought
to be pretty unusual.
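
For what it's worth, the hourly figure is easy enough to sanity-check from
the controlfile history rather than relying on the original estimate -
something along these lines (untested, from memory, and assuming the logs
haven't already aged out of v$archived_log):

    -- rough archived redo volume per hour
    select trunc(completion_time, 'HH24')            as hour,
           round(sum(blocks * block_size)/1024/1024) as mb_archived
    from   v$archived_log
    group by trunc(completion_time, 'HH24')
    order by 1;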

I see that this database is being considered for Data Guard. I wonder if a
smarter strategy wouldn't simply be to have two DW targets and run the same
ETL at both of them, or an intermediate DB that does the extract and
transform and then parallel loads. As I see it, using DG will pretty much
render most of the performance strategies traditionally used in DW
development fairly useless. Usually at this point, of course, Tim Gorman or
Greg Rahn pop up and gently point out how little I know.
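
The reason I say that is that a physical standby more or less requires
FORCE LOGGING on the primary, which quietly turns your NOLOGGING
direct-path loads back into fully logged operations. Easy enough to see
where you stand (again untested, from memory):

    -- is the primary already forcing redo for everything?
    select force_logging from v$database;

    -- the usual prerequisite before standing up the standby
    alter database force logging;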


-- 
Niall Litchfield
Oracle DBA
http://www.orawin.info
