Re: issue with wait on log buffer space. Oracle recommends enable write cache??

  • From: Li-Shan Cheng <exriscer@xxxxxxxxx>
  • To: jaco.polet@xxxxxxxxx
  • Date: Tue, 31 Jan 2006 15:35:38 +0100

Can you check how fast the disk where you placed the redo logs is? Run
some dd tests to find out...
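
For example, a minimal sketch (assuming /u02/oradata is where your redo logs
live -- the path and sizes are placeholders, adjust them for your system):

    # write ~100 MB in 8k blocks to the redo log disk and time it
    time dd if=/dev/zero of=/u02/oradata/ddtest.dat bs=8k count=12800
    # clean up the test file afterwards
    rm /u02/oradata/ddtest.dat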



On 1/31/06, Jaco Polet <jaco.polet@xxxxxxxxx> wrote:
>
> Tried this but it didn't improve
>
> 2006/1/31, Vlad Sadilovskiy <vlovsky@xxxxxxxxx>:
> >
> > Solaris on UFS file systems may cause async IO to be slow, even though
> > you have set the mount to "forcedirectio".
> >
> > In that case it is not enough just to turn off disk_asynch_io. You might
> > also want to try "_lgwr_async_io"=FALSE.
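> >
> > A sketch of how that might look (assuming an spfile; with a plain init.ora
> > just add the two parameters there instead -- and check the hidden parameter
> > with Oracle Support before setting it; both are static, so a restart is
> > needed):
> >
> >     sqlplus -s "/ as sysdba" <<'EOF'
> >     alter system set disk_asynch_io=false scope=spfile;
> >     alter system set "_lgwr_async_io"=false scope=spfile;
> >     EOF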
> >
> >
> > On 1/31/06, Jaco Polet <jaco.polet@xxxxxxxxx> wrote:
> > >
> > > Anand
> > >
> > > Thanks for your feedback.
> > > My answers are inline
> > > I will test with a 2Mb log buffer
> > >
> > > Jaco
> > >
> > >
> > > 2006/1/31, Anand Rao <panandrao@xxxxxxxxx>:
> > > >
> > > > *Hi,
> > > >
> > > > what was the size of your log_buffer and what is the current size?
> > > > you are only inserting about 32000 rows, so it shouldn't be a big deal.
> > > > How frequent is this data load job?*
> > >
> > >
> > > We doubled the log buffer from 0.5 MB to 1 MB. No effect.
> > > We only copied the data from one table to another. As the trace shows,
> > > it's only 32000 rows stored in +/- 10 MB.
> > >
> > > *'log buffer space' waits are generally caused by a small log
> > > buffer relative to the amount of redo you are generating.
> > >
> > > There is no direct reason why db_writer_processes has to be
> > > increased for 'log buffer space' waits. If you have configured
> > > async IO on the OS side, then you should use disk_asynch_io = TRUE.
> > > Are you using RAW volumes for your redologs and datafiles? Otherwise it
> > > doesn't make sense to turn on async IO.*
> > > We changed disk_asynch_io to FALSE and set db_writer_processes to 2. We
> > > are not using RAW volumes.
> > >
> > > *I can't remember too many problems with the Solaris 8 async driver, but
> > > you are better off checking it with your sysadmin as well as searching
> > > Metalink for any async IO related patches.
> > >
> > > If you are using UFS or VxFS for redo and datafiles, then what you
> > > need to do is reduce your filesystem buffer cache. Are you using Direct IO
> > > or some new version of this 'near RAW performance filesystem booster
> > > package'? You need to consider all these filesystem config options before
> > > deciding to use async IO.*
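> > >
> > > (A quick sketch for checking whether the redo filesystem is really
> > > mounted with forcedirectio -- assuming the redo logs live under /u02:)
> > >
> > >     mount -v | grep /u02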
> > > We use Direct IO
> > >
> > > *When changed, you need to see if your disk IO throughput is able to
> > > handle the workload generated. You may have very few and slow disks,
> > > causing LGWR to be slow writing data from the log buffer to the redolog
> > > files.*
> > >
> > > Use a 1MB log buffer and see what you get. It all depends on your redo
> > > rate.
> > >
> > > regards
> > > anand*
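> > >
> > > (A sketch for measuring the redo rate anand mentions -- run it before and
> > > after the load and compare the two 'redo size' values:)
> > >
> > >     sqlplus -s "/ as sysdba" <<'EOF'
> > >     show parameter log_buffer
> > >     select name, value from v$sysstat
> > >     where name in ('redo size', 'redo buffer allocation retries');
> > >     EOF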
> > >
> > > >
> > > >
> > > > On 31/01/06, Jaco Polet <jaco.polet@xxxxxxxxx> wrote:
> > > > >
> > > > >   Hello,
> > > > >
> > > > > I have a general issue with loading data into an Oracle 9.2.0.4
> > > > > database on Solaris 8. I reduced it to a simple statement which takes
> > > > > 1 second on my PC (Oracle 9.2.0.6) but takes 20 seconds on the
> > > > > 9.2.0.4 database on Solaris.
> > > > > I traced it, and the trace showed a 19-second wait on 'log buffer
> > > > > space' (see the trace output below).
> > > > > I increased the value of log_buffer and set disk_asynch_io=FALSE
> > > > > (increasing db_writer_processes), but this didn't make a difference.
> > > > > On Metalink I found note 263652.1, which says this is a general
> > > > > issue on all platforms that can be fixed by switching on the write
> > > > > cache. (It doesn't state that it is fixed in a later version.)
> > > > > It sounds strange to me that Oracle recommends writing the redo
> > > > > log to cache. Is this correct, and if so, how do I switch it on (the
> > > > > redologs are on local disks)? I am also surprised that when searching
> > > > > the internet I don't get any hits on this, although it looks like a
> > > > > general issue. What am I missing?
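> > > > >
> > > > > (On the write cache question: on Solaris the write cache of a local
> > > > > SCSI disk can usually be inspected from format in expert mode. A
> > > > > sketch from memory -- please verify the exact menus with your
> > > > > sysadmin first:)
> > > > >
> > > > >     # as root; interactive -- select the disk first
> > > > >     format -e
> > > > >     # then at the prompts:
> > > > >     # format> cache
> > > > >     # cache> write_cache
> > > > >     # write_cache> display   (or: enable)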
> > > > >
> > > > > Hope someone can help me with this... Jaco Polet
> > > > >
> > > > > The formatted trace:
> > > > >
> > > > > insert into t_jpo select * from test_tabel
> > > > >
> > > > > call     count       cpu    elapsed       disk      query    current        rows
> > > > > ------- ------  -------- ---------- ---------- ---------- ----------  ----------
> > > > > Parse        1      0.01       0.00          0          0          0           0
> > > > > Execute      1      0.90      20.08          0       2920       7664       32114
> > > > > Fetch        0      0.00       0.00          0          0          0           0
> > > > > ------- ------  -------- ---------- ---------- ---------- ----------  ----------
> > > > > total        2      0.91      20.09          0       2920       7664       32114
> > > > >
> > > > > Misses in library cache during parse: 1
> > > > > Optimizer goal: CHOOSE
> > > > > Parsing user id: 1340  (PERF)
> > > > >
> > > > > Rows     Row Source Operation
> > > > > -------  ---------------------------------------------------
> > > > >   32114  TABLE ACCESS FULL TEST_TABEL (cr=1182 r=0 w=0 time=96619 us)
> > > > >
> > > > > error during execute of EXPLAIN PLAN statement
> > > > > ORA-00942: table or view does not exist
> > > > >
> > > > > parse error offset: 109
> > > > >
> > > > > Elapsed times include waiting on following events:
> > > > >   Event waited on                             Times   Max. Wait  Total Waited
> > > > >   ----------------------------------------   Waited  ----------  ------------
> > > > >   log buffer space                               43        1.00         19.18
> > > > >   SQL*Net message to client                       1        0.00          0.00
> > > > >   SQL*Net message from client                     1        2.49          2.49
> > > > > ********************************************************************************
> > > > >
> > > > >
> > > >
> > > >
> > >
> >
>
