RE: Sizing Redo log files

  • From: "Mark W. Farnham" <mwf@xxxxxxxx>
  • To: <niall.litchfield@xxxxxxxxx>, <jkstill@xxxxxxxxx>
  • Date: Tue, 26 Jun 2012 11:47:02 -0400

Another point: you can increase the peak "cache" total volume by
having enough log groups without overinflating the size of each log. This is
especially useful if, from time to time, your redo generation rate in fact
exceeds the rate at which you can archive. If the online logs wrap, updates
hang until the archiver catches up and frees an online redo log group. The
default of 3 online redo log groups is not intended to handle high
performance loads. A lot of total online redo log size may
get you through windows where you are generating logs faster than you can
archive them.
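As a back-of-the-envelope illustration (all figures below are hypothetical, not from any particular system), total online redo capacity determines how long such a burst can be absorbed before updates hang:

```python
# Back-of-the-envelope: how long can a redo burst run before the
# online logs wrap and updates hang? All figures are hypothetical.

def burst_headroom_seconds(group_size_mb, num_groups,
                           burst_redo_mb_s, archive_mb_s):
    """Seconds of burst the online redo logs can absorb.

    The archiver drains at archive_mb_s while the burst fills at
    burst_redo_mb_s; only N-1 groups are usable headroom because the
    current group is still being written.
    """
    net_fill = burst_redo_mb_s - archive_mb_s  # MB/s of backlog growth
    if net_fill <= 0:
        return float("inf")  # archiver keeps up; the logs never wrap
    usable_mb = group_size_mb * (num_groups - 1)
    return usable_mb / net_fill

# Default-ish 3 groups of 512 MB vs. 10 groups of the same size
print(burst_headroom_seconds(512, 3, 100, 60))   # ~25 s of burst headroom
print(burst_headroom_seconds(512, 10, 100, 60))  # ~115 s of burst headroom
```

The point the example makes is the one above: adding groups multiplies the headroom without touching the per-log size.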

As long as arch-based standbys are not required to be kept in sync with the
primary site (meaning either no standby, an lgwr-based standby, or some other
replication technology), arch can fall quite far behind if you have plenty
of total space. Adding groups rather than making each log excessively large
has the advantage of not delaying the switch checkpoint too long, and it
keeps things at a reasonable monolith size (and what is reasonable will vary
with your detailed operational needs AND with changes in hardware technology).

IF (not a great idea in my opinion) the online redo logs and the primary
archived redo log location share the same underlying media, including any
pacing pieces of the technology stack, burst performance demands may require
you to generate logs faster than you can archive them. As long as the online
logs are themselves plexed at a reasonable level, the redo stream is secure
up to the limit of a site storage failure. Whether that level of safety
meets your operational requirements will vary.

In sites with low performance requirements, the acceptable range for the
SWAG at online redo log size is large and forgiving. For high performance
requirements you should set up a test system, drive load through the
various failure scenarios, and adjust the size and count of groups to find
a sweet spot that meets your requirements.

Remember to stop when you reach your performance goal (or exceed it by some
reasonable margin for error). (See Gaja Krishna Vaidyanatha re: CTD,
Compulsive Tuning Disorder.)


-----Original Message-----
From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx]
On Behalf Of Niall Litchfield
Sent: Tuesday, June 26, 2012 9:49 AM
To: jkstill@xxxxxxxxx
Cc: Prasad_Subbella@xxxxxxxxxxx; Oracle-L@xxxxxxxxxxxxx
Subject: Re: Sizing Redo log files

I'm pretty sure you have that the wrong way around. As written, you've
required the time to cycle through the redo to be *less* than the time to
archive that redo, and that'll hurt. I suspect you meant:

Make sure that the time required to archive (N) redo logs is less than the
time taken to generate (N) redo logs. That ought pretty much to imply making
sure the sustained throughput to your arch destination is greater than your
sustained redo rate.
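That steady-state condition reduces to a one-line check; sketched below with hypothetical rates (the function name and numbers are illustrative, not from any tool):

```python
# Niall's restated rule: the archiver keeps up in steady state only if
# the time to archive N logs is less than the time to generate N logs,
# i.e. sustained archive throughput exceeds the sustained redo rate.
# All numbers are hypothetical.

def archiver_keeps_up(redo_mb_per_hour, archive_mb_s):
    """True if sustained archive throughput exceeds the redo rate."""
    redo_mb_s = redo_mb_per_hour / 3600.0
    return archive_mb_s > redo_mb_s

print(archiver_keeps_up(180_000, 60))  # 50 MB/s redo vs 60 MB/s archive
print(archiver_keeps_up(360_000, 60))  # 100 MB/s redo vs 60 MB/s archive
```

Note this is the sustained-rate condition only; as Mark's point above shows, extra groups still buy you slack for bursts that temporarily violate it.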

On Tue, Jun 26, 2012 at 2:32 AM, Jared Still <jkstill@xxxxxxxxx> wrote:

> On Wed, Jun 20, 2012 at 10:48 PM, Prasad Reddy Subbella < 
> Prasad_Subbella@xxxxxxxxxxx> wrote:
> > Hi All,
> > Please share/guide me the best practices to be used for sizing redo 
> > log files.
> >
> >
> Here's one:
> Make sure that the time required to cycle through all (N-1) redo files 
> is less than the time it takes to have the archivelogs created for 
> (N-1) redo logs.
> This should ensure that the archiver will not bump into the redo.
> (please correct math if wrong, I am tired)
> Jared Still
> Certifiable Oracle DBA and Part Time Perl Evangelist
> Oracle Blog:
> Home Page:
> --

Niall Litchfield
Oracle DBA


