Re: DBWR - How Many is Too Many?

  • From: "Greg Rahn" <greg@xxxxxxxxxxxxxxxxxx>
  • To: david.barbour1@xxxxxxxxx
  • Date: Wed, 27 Feb 2008 13:49:27 -0800

My first question to you (and to others who increase the number of dbwr
processes): are you seeing 'free buffer waits' in the 'Top 5' section of your
Statspack report?  If not, it is unlikely that adding more dbwr processes will
yield any benefit.
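
As a quick first check outside of a full Statspack report, you could look at
the cumulative wait counts in v$system_event.  A minimal sketch, assuming
Python with the cx_Oracle driver and placeholder connection details (this
shows totals since instance startup, not a Statspack snapshot interval):

  # Minimal sketch: check cumulative 'free buffer waits' since instance
  # startup via v$system_event.  Connection details are placeholders.
  import cx_Oracle

  conn = cx_Oracle.connect("system", "manager", "dbhost/ORCL")  # placeholders
  cur = conn.cursor()
  cur.execute("""
      SELECT event, total_waits, time_waited
        FROM v$system_event
       WHERE event = 'free buffer waits'
  """)
  for event, total_waits, time_waited in cur:
      # time_waited is reported in centiseconds
      print("%s: %d waits, %.1f seconds waited"
            % (event, total_waits, time_waited / 100.0))
  cur.close()
  conn.close()

No rows back (or a negligible count) would suggest dbwr is keeping up and
more writer processes would not help.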

If you are experiencing high wait IO (>40% is high), then what benefit is
there in potentially doing more IO by adding dbwrs?  I would think that would
make things worse.  You need to get to the root cause of why the WIO is
high.  My suggestion is to start by investigating the iostat data (what are
the IO response times, lun queue depths, etc.).  It seems quite likely that
even a simple file-create test would show poor IO.  I would use
/usr/sbin/lmktemp and do some lower-level testing.
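
If you want a very rough stand-in for that kind of test, here is a minimal
sketch (not lmktemp itself, just an analogue in Python) that times a
sequential write plus fsync; the path and sizes are placeholders you would
point at the filesystem in question:

  # Rough analogue of a simple file-create test (not lmktemp): time a
  # sequential write of 1 GB in 128k blocks, fsync'd before the clock stops.
  # PATH, BLOCK, and TOTAL are placeholders, not recommended values.
  import os, time

  PATH = "/oradata01/io_test.tmp"   # placeholder: file on the filesystem under test
  BLOCK = 128 * 1024                # 128k, to mirror the SAN element size
  TOTAL = 1024 * 1024 * 1024        # 1 GB

  buf = b"\0" * BLOCK
  start = time.time()
  f = open(PATH, "wb")
  written = 0
  while written < TOTAL:
      f.write(buf)
      written += BLOCK
  f.flush()
  os.fsync(f.fileno())              # force the data to disk before timing stops
  f.close()
  elapsed = time.time() - start
  mb = TOTAL / (1024.0 * 1024.0)
  print("wrote %.0f MB in %.1f s (%.1f MB/s)" % (mb, elapsed, mb / elapsed))
  os.remove(PATH)

If even a plain sequential write like that is slow compared to what the old
storage delivered, the problem is below the database layer.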

The other observation I would make is that if you changed the SAN, and not
the database, and it worked ok before, then in all likelihood it is not a
database problem.  No?


On 2/27/08, David Barbour <david.barbour1@xxxxxxxxx> wrote:
>
> sar is scary (just a small portion)
>
> 00:00:00    %usr    %sys    %wio   %idle   physc
> 02:15:01      19      19      42      19    4.00
> 02:20:00      21      25      40      14    4.00
> 02:25:00      19      18      43      20    4.00
> 02:30:00      18      18      43      21    4.00
> 02:35:00      20      24      40      16    4.00
>
> We're running JFS2 filesystems with CIO enabled and a 128k element size on
> the SAN, and the AIO servers are set at minservers = 220 and maxservers =
> 440.  We've got 32GB of RAM on the server and 4 CPUs (which are dual-core
> for all intents and purposes - they show up as eight).


> I'm thinking the data block waits are the result of too many modified
> blocks in the buffer cache.  The solution would be to increase the number
> of db_writer_processes, but we've already got 4.  Metalink, manuals,
> training guides, Google, etc. seem to suggest two answers.
>
> 1.  One db writer for each database disk - in our case that would be 8
> 2.  CPUs/8 adjusted for multiples of CPU groups - in our case that would
> be 4
>
> Any thoughts?
>
>


-- 
Regards,

Greg Rahn
http://structureddata.org
