Re: LGWR tuning

  • From: Anand Rao <panandrao@xxxxxxxxx>
  • To: paul@xxxxxxxxxxxxx
  • Date: Wed, 15 Feb 2006 10:11:39 +0530

Hi Paul,

our application sometimes generates about 4-5MB of redo per second. i use a log
buffer of 5MB, sometimes 6MB. about 8-14% of our waits are on log file sync,
depending on what our application does.

sometimes, on production sites and in testing too, i have seen 30-40% log file
sync waits. that, in my view, is due to inadequate IO capacity.
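a quick way to see where you stand is to compare the sync wait with the
underlying redo write wait. something like the sketch below (assuming the
9i/10g v$system_event columns; numbers are cumulative since startup) gives a
rough picture: if 'log file parallel write' is slow too, the IO subsystem is
the suspect, if only 'log file sync' is high, look at the commit rate:

    select event, total_waits,
           round(time_waited_micro/total_waits/1000, 2) avg_ms
    from   v$system_event
    where  event in ('log file sync', 'log file parallel write');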

for large systems, we use at least 94-112 disks and use SAME (stripe and
mirror everything) for redo logs and datafiles. works fine for me :-)

like someone mentioned, it depends on your IO capacity. the primary causes of
log file sync waits are:

1) Excessive COMMITs in the application
2) Slow IO subsystem

Oracle bugs are another distinct possibility.
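to get a rough feel for cause 1, i like to compare the commit rate with the
redo write rate, i.e. how many commits piggyback on each LGWR write. a minimal
sketch against v$sysstat (cumulative since startup; read it together with the
wait numbers rather than on its own):

    select com.value user_commits, wri.value redo_writes,
           round(com.value/wri.value, 1) commits_per_write
    from   v$sysstat com, v$sysstat wri
    where  com.name = 'user commits'
    and    wri.name = 'redo writes';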

A proper solution would be to avoid "over committing", but that means an
application change, which translates to a lot of time, money and resources.
it is still the better fix, in my view. even so, you will need to attain a
well-balanced IO subsystem with enough throughput to match your redo
generation rate.
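to know what rate the redo devices have to sustain, i take the 'redo size'
statistic. the sketch below (assuming v$instance.startup_time) only gives the
crude average since startup; two samples a few minutes apart during your peak
are far more telling:

    select round(s.value/1024/1024) redo_mb_total,
           round(s.value/1024/1024 / ((sysdate - i.startup_time)*86400), 2) avg_mb_per_sec
    from   v$sysstat s, v$instance i
    where  s.name = 'redo size';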

cheers
anand


On 15/02/06, Paul van den Bogaard <paul@xxxxxxxxxxxxx> wrote:
>
> Hi Luca,
>
> no, it is not this one. This one is used to trigger a timely occurrence
> of the next IO, to prevent the log buffer from filling up. If a commit
> happens when the whole log buffer is full, that commit could suffer a long
> wait.
>
> In my case I have plenty of commits. These all trigger LGWR to start (and
> keep) writing to disk. Looking at these writes from a syscall perspective I
> see that the max size of a single write request is 1MB. When looking at the
> data in v$sysstat I see the average is indeed bigger than 1MB. Diving into
> what's happening at the device layer, I see that what could have been a
> single write is chopped into chunks. These chunks are written in a fast
> sequence of write calls using async IO.
> I am looking at ways to reduce this number of calls.
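> for reference, the average write size comes from something like the sketch
> below: 'redo blocks written' times the log block size (which I am assuming
> is 512 bytes on this platform) divided by 'redo writes':
>
>     select wri.value redo_writes,
>            round(blk.value*512/1024/1024/wri.value, 2) avg_write_mb
>     from   v$sysstat wri, v$sysstat blk
>     where  wri.name = 'redo writes'
>     and    blk.name = 'redo blocks written';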
>
> Thanks
> Paul
>
> Luca Canali wrote:
> > Hi Paul,
> >
> > The parameter you are looking for is probably _log_io_size (see
> > http://www.ixora.com.au/tips/tuning/log_buffer_size.htm), but I doubt it
> > can help you. One of the reasons is that IO is also fragmented at the OS
> > (Solaris >= 8 defaults to 1 MB, if I am not wrong) and HW (storage array)
> > levels.
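> > If you want to peek at its current value, the usual x$ query works (run
> > as SYS; hidden parameters are unsupported territory, so treat this as a
> > look, not a recommendation; the value is in log blocks rather than bytes,
> > if I remember right):
> >
> >     select p.ksppinm name, v.ksppstvl value, v.ksppstdf is_default
> >     from   x$ksppi p, x$ksppcv v
> >     where  p.indx = v.indx
> >     and    p.ksppinm = '_log_io_size';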
> > Tuning to reduce the redo volume, or changing the physical devices
> > where you write the redo logs (avoiding RAID5, avoiding read/write
> > contention, ..the usual stuff), is normally the way to go for this kind
> > of issue.
> > You can also recreate the redo logs in /tmp (ram disk), but that's a bit
> > 'dangerous' and also kind of cheating.
> >
> > Regards,
> > L.
> > -----------
> > Luca Canali
> > Information Technology Department
> > European Organization for Nuclear Research
> > CERN
> > CH-1211 Geneva 23
> >
> > -----Original Message-----
> > From: oracle-l-bounce@xxxxxxxxxxxxx
> > [mailto:oracle-l-bounce@xxxxxxxxxxxxx] On Behalf Of Paul van den Bogaard
> > Sent: Tuesday, February 14, 2006 5:10 PM
> > To: oracle-l@xxxxxxxxxxxxx
> > Subject: LGWR tuning
> >
> > During a previous benchmark I learned that LGWR will chop writes
> > into chunks of max 1MB. This results in multiple AIO requests being sent
> > to the Solaris kernel in quick succession.
> > The transactions were throttled by the 'log file sync' event. Another
> > benchmark is coming up that will drive this application to even higher
> > limits.
> > I wonder/expect there is a (hidden) parameter that changes this 1MB limit
> > to something more appropriate.
> >
> > Any clues as to what this parameter is called and how it should be set
> > are very welcome. Hints on the side effects that come for free are
> > welcome too :-)
> >
> > Thanks
> > Paul
> >
> > Redo files are on raw devices. A single redo file is mapped onto one or
> > more disk arrays (through host-based volume management) that each have
> > lots of memory-backed cache. The average write time is expected to be at
> > the sub-millisecond level.
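> > Once the benchmark runs, a quick sanity check on that expectation is the
> > average 'log file parallel write' time; a minimal sketch, cumulative
> > since startup:
> >
> >     select total_waits,
> >            round(time_waited_micro/total_waits) avg_write_us
> >     from   v$system_event
> >     where  event = 'log file parallel write';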
