RE: Query performance issue

  • From: "Schauss, Peter" <peter.schauss@xxxxxxx>
  • To: "Mark W. Farnham" <mwf@xxxxxxxx>, <oracle-l@xxxxxxxxxxxxx>
  • Date: Wed, 4 Jun 2008 12:41:26 -0500

Mark,

The 10046 trace shows that almost all of the 28 minutes is accounted for
by the db file sequential read waits.  How do I measure the honeycombed
and "empty front" conditions?

Thanks,
Peter Schauss 

-----Original Message-----
From: Mark W. Farnham [mailto:mwf@xxxxxxxx] 
Sent: Wednesday, June 04, 2008 12:10 PM
To: Schauss, Peter; oracle-l@xxxxxxxxxxxxx
Subject: RE: Query performance issue

The 50% extra in row count is very unlikely to account for the 10-fold
difference. IF there is a 10-fold difference in the number of blocks
holding rows, then you've just observed an actual justification for
rebuilding a table that has become honeycombed or has an extensive
"empty front." Since the big table is index accessed, honeycombing would
be more likely than an "empty front." If that turns out to be true, you
might also consider whether the rebuild would be well served by being in
the order of the index used in this query.
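One way to check for that block-count blowout (a sketch on my part, not
something Mark specified; it uses the w_activity_f table from the query
below, and the user_tables figures are only as fresh as the last
statistics gather):

```sql
-- Blocks that actually contain rows (file#.block# pairs, counted once):
SELECT COUNT(DISTINCT DBMS_ROWID.ROWID_RELATIVE_FNO(rowid) || '.' ||
             DBMS_ROWID.ROWID_BLOCK_NUMBER(rowid)) AS blocks_with_rows
  FROM w_activity_f;

-- Blocks below the high-water mark, per the data dictionary:
SELECT blocks, empty_blocks
  FROM user_tables
 WHERE table_name = 'W_ACTIVITY_F';
```

A large gap between blocks_with_rows and blocks on production, but not
on QA, would point at honeycombing.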

IF there is not a corresponding 10-fold difference in the number of
blocks required to fetch the rows you need, then the next most likely
explanation is that your QA server's disk farm is mostly serving this
query, while your production storage must serve all the many
simultaneous requests for this database and for any other databases
entangled with it on the production SAN. Or, if production updates are
taking place, you could be driving a lot of read-consistent block
retrievals from UNDO that don't occur in the relatively quiescent QA
database. You'd have to test for that difference by running the query
on production at a relatively idle time with respect to updates to the
tables involved.
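One way to test for the UNDO effect (again a sketch of mine, assuming
the 8i statistic names; snapshot in the same session immediately before
and after running the problem query, and compare the deltas):

```sql
-- A large delta in the second statistic implies many blocks were
-- rebuilt from rollback/UNDO to satisfy read consistency.
SELECT n.name, s.value
  FROM v$mystat s, v$statname n
 WHERE s.statistic# = n.statistic#
   AND n.name IN ('consistent gets',
                  'data blocks consistent reads - undo records applied');
```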

I just realized I leaped to a time presumption: you noted the difference
in counts of waits. Is the wait time difference from this event the
biggest difference and a significant fraction of the 26 minute
differential you are looking for? IF NOT, look at the biggest time
differential.

It is also probably worth the few minutes it will take to count the
block gets from a full table scan of each of the tables that are
scanned.

Infrequently (but potentially disastrously), 8.1 and prior databases
created an empty-front condition such that many blocks are scanned
before you get to the first row in the table. If QA was created from an
import or other non-clone copy, the problem would automagically not be
present in QA. A scan with rownum < 2 is sufficient to detect a truly
empty front, while a scan with no stopkey gets you the full honeycomb
and empty-space information up to the high-water mark.
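In SQL*Plus that pair of scans might look like the following (the FULL
hint and the autotrace setting are my additions, not from the note
above):

```sql
SET AUTOTRACE TRACEONLY STATISTICS

-- Stopkey scan: on a truly empty-front table, "consistent gets" will
-- be large even though only one row is returned.
SELECT /*+ FULL(t) */ * FROM w_activity_f t WHERE ROWNUM < 2;

-- Full scan to the high-water mark: the gets here include honeycomb
-- and empty-front space.
SELECT /*+ FULL(t) */ COUNT(*) FROM w_activity_f t;
```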

Regards,

mwf

-----Original Message-----
From: oracle-l-bounce@xxxxxxxxxxxxx
[mailto:oracle-l-bounce@xxxxxxxxxxxxx]
On Behalf Of Schauss, Peter
Sent: Wednesday, June 04, 2008 10:08 AM
To: oracle-l@xxxxxxxxxxxxx
Subject: Query performance issue

Oracle 8.1.7.4 - Solaris 5.9 - Cost based optimizer

I have a query which takes 28 minutes to run on my production system and
about two minutes to run on my QA database.

The query looks like this:

<snip>
Row counts for w_day_d, w_lov_d, and w_region_d are within 10% of each
other on both databases.
The row count for w_activity_f is 3x10**6 on production and 2x10**6 on
QA.

The 10046 trace for production lists 3x10**5 db file sequential read
waits, while the trace for QA lists 2x10**4.

Would the 50% difference in the size of my w_activity_f table be
sufficient to account for the 10-fold increase in the number of
sequential read waits and the corresponding 10-fold difference in run
time or should I be looking for hardware or other issues?

Thanks,
Peter Schauss
  
--
//www.freelists.org/webpage/oracle-l



