Slower db file parallel read waits after migrating to SAN

  • From: Neil Kodner <nkodner@xxxxxxxxx>
  • To: oracle-l-freelists <oracle-l@xxxxxxxxxxxxx>, Neil Kodner <nkodner@xxxxxxxxx>
  • Date: Tue, 23 Feb 2010 06:54:17 -0500

A batch job which used to take 20-30 minutes is now taking upwards of three
hours following a change in storage.

The query

select count(*)
from   pm_clntmnth c
where  c.client_number = abawd_rec.client_number
and    c.benefit_month >= abawd_rec.first_abawd_month
and    c.abawd_status_code = 'AB';

executes inside of a loop.  Table pm_clntmnth has 422 million rows and is
indexed by (client_number, benefit_month), meaning we have to access the
table in addition to the index.  Before I add abawd_status_code to the index
to eliminate the table access outright (see the sketch below), I'd like to
know whether db file parallel read is my culprit, or whether something else
is at play after the storage change.  I don't have much experience with
tables of this size, so I'm wondering if I'm missing something.  The plan
for the query looks unchanged.

Here's some sample 10046 trace information:
EXEC #9:c=0,e=123,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=1,tim=1237162345257814
WAIT #9: nam='db file sequential read' ela= 13786 file#=32 block#=198624 blocks=1 obj#=246504 tim=1237162345272111
WAIT #9: nam='db file sequential read' ela= 60181 file#=33 block#=197057 blocks=1 obj#=246504 tim=1237162345332354
WAIT #9: nam='db file parallel read' ela= 476870 files=7 blocks=29 requests=29 obj#=75690 tim=1237162345809667
WAIT #9: nam='db file sequential read' ela= 47232 file#=25 block#=248564 blocks=1 obj#=75690 tim=1237162345857222
WAIT #9: nam='db file parallel read' ela= 395248 files=7 blocks=39 requests=39 obj#=75690 tim=1237162346253035
WAIT #9: nam='db file sequential read' ela= 34 file#=29 block#=99981 blocks=1 obj#=75690 tim=1237162346253668
WAIT #9: nam='db file parallel read' ela= 363176 files=7 blocks=35 requests=35 obj#=75690 tim=1237162346617350
WAIT #9: nam='db file sequential read' ela= 49773 file#=30 block#=286527 blocks=1 obj#=75690 tim=1237162346667765
WAIT #9: nam='db file parallel read' ela= 123836 files=7 blocks=30 requests=30 obj#=75690 tim=1237162346792040
WAIT #9: nam='db file sequential read' ela= 1368 file#=28 block#=95528 blocks=1 obj#=75690 tim=1237162346793987
FETCH #9:c=13998,e=1536345,p=139,cr=156,cu=0,mis=0,r=1,dep=1,og=1,tim=1237162346794175
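Doing the math on that fetch: 139 physical block reads in 1,536,345
microseconds works out to roughly 11 ms per block on average, and the
first db file parallel read alone is 476,870 us / 29 blocks, about 16 ms
per block, even though those 29 requests are supposedly issued
concurrently.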

I don't know whether the db file parallel reads were present before the
storage change; I just know that the process is taking 4x longer, and
performance was never really an issue until then.  It's not a matter of
the procedure gradually taking longer and longer; the runtimes grew
immediately after the recent switch in back-end storage.
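Since I don't have before-numbers, the best I can do is sanity-check the
average latencies the instance is seeing now. A quick sketch against
v$system_event (instance-wide, since startup):

-- Average time per wait for the two I/O events in question.
-- time_waited_micro is available in 10.2.
select event,
       total_waits,
       round(time_waited_micro / total_waits / 1000, 1) avg_ms
from   v$system_event
where  event in ('db file sequential read', 'db file parallel read');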

Here are some of the vitals:
Degree is set to 1 on all relevant tables and indexes; we're not using
parallel execution at all during this proc.

10.2.0.4 on Linux.
Storage was changed from single-channel SCSI disks to a SAN with four
fibre paths.

PNAME                               PVAL1
------------------------------ ----------
CPUSPEEDNW                     1371.36113
IOSEEKTIM                              10
IOTFRSPEED                           4096
SREADTIM
MREADTIM
CPUSPEED
MBRC
MAXTHR
SLAVETHR
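SREADTIM, MREADTIM and MBRC being null means we've only ever had
noworkload stats here. If the optimizer costing should reflect the new
SAN, gathering workload stats over a representative window would look
something like this (the window length is just a guess on my part):

-- Capture workload system statistics while typical load is running.
exec dbms_stats.gather_system_stats('START');
-- ... let the normal workload run, e.g. 30-60 minutes ...
exec dbms_stats.gather_system_stats('STOP');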

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
optimizer_dynamic_sampling           integer     2
optimizer_features_enable            string      10.2.0.4
optimizer_index_caching              integer     0
optimizer_index_cost_adj             integer     100
optimizer_mode                       string      ALL_ROWS
optimizer_secure_view_merging        boolean     TRUE
db_file_multiblock_read_count        integer     16
db_file_name_convert                 string
db_files                             integer     200
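To see whether the new paths are uniformly slow or whether it's only
some files, here's a sketch of per-file read averages since startup
(readtim is in centiseconds):

-- Average read time per datafile; greatest() avoids divide-by-zero.
select file#,
       phyrds,
       round(readtim * 10 / greatest(phyrds, 1), 1) avg_read_ms
from   v$filestat
order  by avg_read_ms desc;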
