RE: extreme high 'db file parallel wait' on ZFS

I opened an SR with Oracle back in November of 2009 to address the 
filesystemio_options setting for a Solaris database on ZFS.  Oracle refused to 
make any statement and directed us to work with the O/S vendor.  (Aren't they 
the same company now?  :) )

Long story short, we paid Oracle consultants big bucks to help performance 
tune one of our key engineering application databases - and lo and behold they 
told us to switch filesystemio_options from ASYNCH to SETALL for Solaris 10.  
We are still not completely out of the woods - and that was just one of 
probably 20 or 30 parameters we changed.
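
For reference, the change itself is just one statement; roughly (a minimal 
sketch, assuming an spfile and OS authentication - the parameter is static, so 
it only takes effect after an instance restart):

sqlplus -s "/ as sysdba" <<'EOF'
-- check the current value first
show parameter filesystemio_options
-- SETALL requests both direct and asynchronous I/O where the file system supports them
alter system set filesystemio_options = SETALL scope=spfile;
EOF
# bounce the instance afterwards for the new value to take effect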

This particular e-mail group either lacks interest in ZFS - or doesn't want to 
share its secrets.  It will be interesting to see where Oracle goes now that 
they own both ASM and ZFS.

Bill

________________________________
From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx] On 
Behalf Of Andrew Kerber
Sent: Monday, April 26, 2010 9:02 AM
To: gengmao@xxxxxxxxx
Cc: oracle-l@xxxxxxxxxxxxx
Subject: Re: extreme high 'db file parallel wait' on ZFS

I am not really that familiar with ZFS, but is async IO or its equivalent 
turned on for that drive?
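
Something along these lines would show both the Oracle side and the ZFS side 
(just a rough sketch - the dataset name 'oracle' is taken from the zpool 
output below, and the property list is only a common starting point):

# Oracle side: what I/O options is the instance requesting?
sqlplus -s "/ as sysdba" <<'EOF'
show parameter filesystemio_options
show parameter disk_asynch_io
EOF

# ZFS side: properties that commonly matter for database files
zfs get recordsize,compression oracle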
On Mon, Apr 26, 2010 at 6:03 AM, gengmao 
<gengmao@xxxxxxxxx> wrote:
Hello,

I am running a batch job on an Oracle 10.2.0.3 database, which resides on a ZFS 
file system. The job runs very slowly. I got an AWR report, and it shows the 
following top events.

Top 5 Timed Events                                         Avg %Total
~~~~~~~~~~~~~~~~~~                                        wait   Call
Event                                 Waits    Time (s)   (ms)   Time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
db file parallel write                  544       5,579  10256  220.2 System I/O
log file parallel write               6,345       1,079    170   42.6 System I/O
CPU time                                            697          27.5
log file switch (checkpoint in          651         638    981   25.2 Configurat
db file sequential read             241,752         546      2   21.5   User I/O

Obviously it is very I/O intensive. But I can't understand why the average 'db 
file parallel write' wait is so high - 5,579 seconds over 544 waits is about 10 
seconds per wait! Normally it should be close to the average I/O service time, 
right?

Following is a sample output from iostat -xn 5. It seems the asvc_t is not very 
high.
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c1t0d0
    0.0    1.8    0.0    6.0  0.0  0.0    0.0    0.1   0   0 c0t0d0
   28.2   27.4 3596.2 1140.8  0.0  0.6    0.0   10.7   0  20 c0t1d0
   30.6   17.2 3916.2 1204.4  0.0  1.0    0.0   21.5   0  29 c0t2d0
   28.2   22.2 3609.0 1261.5  0.0  0.8    0.0   16.8   0  18 c0t3d0
   36.0   12.8 4581.7 1268.1  0.0  1.1    0.0   21.9   0  24 c0t4d0
   31.6   12.6 4044.2 1257.2  0.0  0.8    0.0   17.3   0  22 c0t5d0
   31.6   12.6 4044.2 1268.9  0.0  0.9    0.0   20.8   0  29 c0t6d0
   30.4   12.8 3890.6 1177.7  0.0  1.0    0.0   22.5   0  27 c0t7d0

Following is a sample output from zpool iostat -v 5:
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
oracle       332G   620G    175  1.33K  22.0M  31.9M
  c0t1d0    47.3G  88.7G     23    196  2.97M  4.69M
  c0t2d0    47.4G  88.6G     31    194  3.99M  4.53M
  c0t3d0    47.4G  88.6G     29    195  3.67M  4.43M
  c0t4d0    47.3G  88.7G     20    190  2.60M  4.33M
  c0t5d0    47.4G  88.6G     24    191  3.12M  4.38M
  c0t6d0    47.4G  88.6G     21    194  2.72M  4.80M
  c0t7d0    47.3G  88.7G     23    198  2.90M  4.74M
----------  -----  -----  -----  -----  -----  -----

Could anyone explain the extremely high 'db file parallel write' waits? Is it 
possible to tune ZFS to get better performance?

Thanks in advance!
Eric Geng



--
Andrew W. Kerber

'If at first you dont succeed, dont take up skydiving.'
