Terabytes of archived redo logs

  • From: Lou Avrami <avramil@xxxxxxxxxxxxxx>
  • To: <oracle-l@xxxxxxxxxxxxx>
  • Date: Thu, 03 May 2007 01:08:34 -0400 (EDT)

Hi folks,

I'm about to inherit an interesting project - a group of five 9.2.0.6 databases 
that produce approximately 2 terabytes (!) of archived redo log files per day.
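(If my math is right, that works out to roughly 2,000,000 MB / 86,400 seconds, or about 24 MB/sec of sustained redo across the group - call it 5 MB/sec per database - which is what any tape or network destination would have to absorb around the clock.)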

Apparently the vendor configured the HP ServiceGuard clusters in such a way 
that it takes over an hour to shut down all of the packages in order to shut 
down the database.  This amount of downtime supposedly can't be supported, so 
they decided to go with online backups and no downtime.

Does anyone out there have any suggestions on handling 400 gig of archived redo 
log files per day per database?  I was thinking of either a near-continuous RMAN 
job or a shell script run from cron that would write the logs to either tape or 
a storage server.  Actually, I think our tape library might also be overwhelmed 
by the constant write activity.  My thinking right now is a storage server, 
using a dedicated fast network connection to push the logs over.  Storage 
capacity might be an issue, though.
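
To make that concrete, here's the kind of cron-driven sweep I had in mind - 
just a rough sketch, with a made-up SID, ORACLE_HOME and backup destination, 
not anything from the actual environment:

#!/bin/sh
# Rough sketch of a cron-driven archive log sweep.  ORACLE_SID, ORACLE_HOME
# and the /backup/arch destination below are placeholders.
ORACLE_SID=PROD1;  export ORACLE_SID
ORACLE_HOME=/u01/app/oracle/product/9.2.0;  export ORACLE_HOME

# Back up any archived logs that haven't been backed up yet, then delete
# them from the archive destination so it doesn't fill up.
$ORACLE_HOME/bin/rman target / <<EOF
run {
  allocate channel d1 type disk
    format '/backup/arch/%d_%s_%p.bkp';
  backup archivelog all
    not backed up 1 times
    delete input;
  release channel d1;
}
exit
EOF

Run every 15-30 minutes from cron, something like that should keep the archive 
destination drained; the backup pieces could then sit on the storage server 
(reached over the dedicated link) and go to tape from there at a slower pace.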

If anyone has any thoughts or suggestions, they would be appreciated.
BTW, I already had the bright idea of NOARCHIVELOG mode and cold backups.  :)

Thanks,
Lou Avrami




--
//www.freelists.org/webpage/oracle-l

