RE: Terabytes of archived redo logs

  • From: "Mercadante, Thomas F \(LABOR\)" <Thomas.Mercadante@xxxxxxxxxxxxxxxxx>
  • To: <avramil@xxxxxxxxxxxxxx>, <oracle-l@xxxxxxxxxxxxx>
  • Date: Thu, 3 May 2007 07:47:39 -0400

Lou,

Although this is a challenge, this problem is really no different from
any other production database.  It's just a matter of scale.

You need enough free disk space to hold, let's say, two days' worth of
archivelog files - at 2 TB a day, that's roughly 4 TB of free disk.
And you also need a fast enough tape backup system so that you can run
backups, say, every two hours to keep the archivelog files moving off
the system.

That's the theory: enough free disk in case you have problems with
backups, and scheduled backups to keep the disk clear.
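
Something like this, run from cron every two hours, is what I have in
mind - an untested sketch, with a placeholder SID, and assuming an SBT
(tape) channel is already configured with your media manager:

#!/bin/sh
# archivelog_sweep.sh - hypothetical name; run every two hours from cron.
# Assumes the Oracle environment is already set for this instance.
export ORACLE_SID=PROD1    # placeholder SID

rman target / <<EOF
RUN {
  ALLOCATE CHANNEL t1 DEVICE TYPE sbt;
  # back up every archived log on disk, then delete the disk copies
  BACKUP ARCHIVELOG ALL DELETE INPUT;
}
EOF

# crontab entry (HP-UX cron has no */2 shorthand):
# 0 0,2,4,6,8,10,12,14,16,18,20,22 * * * /usr/local/bin/archivelog_sweep.sh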

That's what I would do.

Tom


-----Original Message-----

From: oracle-l-bounce@xxxxxxxxxxxxx
[mailto:oracle-l-bounce@xxxxxxxxxxxxx] On Behalf Of Lou Avrami
Sent: Thursday, May 03, 2007 1:09 AM
To: oracle-l@xxxxxxxxxxxxx
Subject: Terabytes of archived redo logs

Hi folks,

I'm about to inherit an interesting project - a group of five 9.2.0.6
databases that produce approximately 2 terabytes (!) of archived redo
log files per day.

Apparently the vendor configured the HP ServiceGuard clusters in such a
way that it takes over an hour to shut down all of the packages in order
to shut down the database.  This amount of downtime supposedly can't be
supported, so they decided to go with online backups and no downtime.

Does anyone out there have any suggestions on handling 400 GB of
archived redo log files per day per database (2 TB across the group)?
I was thinking of either a near-continuous RMAN job or a cron-driven
shell script that would write the logs to either tape or a storage
server.  Actually, I think that our tape library might also be
overwhelmed by the constant write activity - 2 TB a day averages out
to roughly 24 MB/second of sustained writes.  My thinking right now is
a storage server, using a dedicated fast network connection to push
the logs over.  Storage, though, might be an issue.
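
Roughly what I have in mind - an untested sketch with hypothetical
paths and hostname:

#!/bin/sh
# push_archlogs.sh - run from cron every few minutes.
ARCH_DIR=/u02/oradata/PROD1/arch    # placeholder archive destination
DEST=backuphost:/archlogs/PROD1     # placeholder storage server target

# Copy completed archived logs across the dedicated link.  Deletion is
# deliberately left to a separate job (or to RMAN) that verifies the
# far-side copy first, so a failed transfer never costs us a log.
rsync -a --partial "$ARCH_DIR"/ "$DEST"/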

If anyone has any thoughts or suggestions, they would be appreciated.
BTW, I already had the bright idea of NOARCHIVELOG mode and cold
backups.  :)

Thanks,
Lou Avrami




--
//www.freelists.org/webpage/oracle-l