Re: db file scattered reads

  • From: K Gopalakrishnan <kaygopal@xxxxxxxxx>
  • To: big.dave.roberts@xxxxxxxxxxxxxx
  • Date: Sun, 5 Dec 2010 10:06:54 -0600

Hi All-

Depending on the version of the database, some of these tweaks might no longer
work.  One of the optimizations in 11g is that most full scans of large tables
are read directly into the PGA of the user session (unless you use event 10949
to disable it), so you will see direct path read waits instead of db file
scattered read waits. The boundary between a small and a large table is
controlled by the underscore parameter _small_table_threshold, and I believe
this still defaults to 2% of the buffer cache.
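To see whether this behaviour applies, you can check the hidden parameter and,
if necessary, fall back to buffered reads for a test session. A rough sketch
(the x$ views require SYS access, and hidden parameters are unsupported to
change):

```sql
-- Check the hidden threshold (value is in blocks); requires SYS
SELECT i.ksppinm  AS parameter,
       v.ksppstvl AS value
FROM   x$ksppi  i,
       x$ksppcv v
WHERE  i.indx = v.indx
AND    i.ksppinm = '_small_table_threshold';

-- Disable serial direct path reads for this session only
ALTER SESSION SET EVENTS '10949 trace name context forever, level 1';
```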

If you cannot avoid full table scans (FTS) for those tables, here is what I would consider (!)

A. Pack those blocks (compression, a lower PCTFREE, CTAS, or whatever) so that
we read fewer blocks. Be warned that compression adds CPU overhead, and locks
can escalate from row level to block level if there is no space for
additional ITLs, etc.
B. Introduce partitioning. With partition pruning this will reduce the number
of blocks we hit during the scans, or the table can be scanned in parallel
(consider hash partitioning if you cannot find the right range or list
partitioning strategy).
C. Multiple buffer pools also work sometimes, but re-read the first paragraph
for why they may not help (direct path reads bypass the buffer cache).
D. If you cannot avoid table scans, consider Exadata ;). It will offload the
table scans to the storage cells, and you will have an opportunity to add
Exadata to your resume!
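As an illustration of options A and B together, a compressed, range-partitioned
rebuild might look like the following (table, column, and partition names are
hypothetical; validate grants and dependent objects before swapping tables):

```sql
CREATE TABLE big_table_new
  COMPRESS                                  -- option A: pack more rows per block
  PARTITION BY RANGE (order_date) (         -- option B: enable partition pruning
    PARTITION p2010 VALUES LESS THAN (DATE '2011-01-01'),
    PARTITION pmax  VALUES LESS THAN (MAXVALUE)
  )
AS SELECT * FROM big_table;
```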

Happy Holidays !

-Gopal


On Sun, Dec 5, 2010 at 9:14 AM, David Roberts <
big.dave.roberts@xxxxxxxxxxxxxx> wrote:

> You have correctly identified SQL Profiles as a potential solution. Another
> poster has also suggested a buffering strategy, using the keep pool
> to achieve that goal.
>
> Another responder has suggested manipulating the indexes, which you should
> consider in negotiation with your supplier.
>
> Between these suggestions sits either real or artificial manipulation of the
> index statistics.
>
> At the positive end, this could be a re-organisation of the index (through
> an ordered re-organisation of the table by CTAS, to make the use of a good
> index cheaper) or the manipulation of Oracle statistics
> as recommended by Wolfgang.
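> A minimal sketch of that ordered CTAS, with hypothetical table and column
> names (the ORDER BY clusters rows on the indexed column, improving its
> clustering factor):
>
> ```sql
> CREATE TABLE orders_reorg AS
>   SELECT * FROM orders ORDER BY customer_id;
> ```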
>
> OTOH, if the dataset is large, you may need to consider partitioning.
>
> Dave
>
> PS, you may find only a minority of participants on this
> list recommend publications by Rampant press.
>
> My personal preference is:
> Troubleshooting Oracle Performance
> by
> Christian Antognini
>
> On Wed, Dec 1, 2010 at 9:00 PM, Tim Gorman <tim@xxxxxxxxx> wrote:
>
>>  Lyall,
>>
>> Regarding your comment that this is a "off-the-shelf application that you
>> can't touch":  adding custom-built indexes generally does not violate
>> support for 3rd-party off-the-shelf applications.  The main requirement is
>> usually that you build them in such a way that they are easily identifiable,
>> easily dropped and re-created, and not created so as to disrupt product
>> patching, etc.  I'm sure lots of folks on this list have lots of advice on
>> this, covering a wide variety of COTS apps.  Perhaps someone on this list is
>> managing the same application as well?
>>
>> Please verify with your application vendor what their policies are on
>> support for custom-built indexes.
>>
>> Hope this helps...
>>
>> Tim Gorman
>> consultant -> Evergreen Database Technologies, Inc.
>> postal     => 13087 West Cedar Drive #225, Lakewood CO 80228
>> website    => http://www.EvDBT.com/
>> email      => Tim@xxxxxxxxx
>> mobile     => +1-303-885-4526
>> fax        => +1-303-484-3608
>> Lost Data? => http://www.ora600.be/ for info about DUDE...
>>
>>
>> On 12/1/2010 1:04 PM, lyallbarbour@xxxxxxxxxxxxxxx wrote:
>>
>> Hello everyone,
>>   From what I understand in my reading of this wait event, it happens most
>> often with Full Table Scans.  As we all know, FTSs aren't necessarily
>> a bad thing.  Looking through ADDM and ASH reports from one of our
>> databases, db file scattered reads on one table are the main culprit for the
>> bottlenecks on this system.  This is an off-the-shelf application that I
>> can't touch, so the only SQL tuning I can do is with SQL Profiles.  Is there
>> anything I can do behind the scenes to relieve some of this bottleneck?  Move
>> the table into its own tablespace?  Change the storage settings?  That's the
>> one thing I can't find through Google: what to do if you see db file
>> scattered reads.  I'm still waiting for my tuning guide from Rampant, it's
>> on backorder...
>>
>> Thanks,
>> Lyall
>>
>>
>>   -- //www.freelists.org/webpage/oracle-l
>
>
>
