RE: db file sequential read - again

  • From: "Bobak, Mark" <Mark.Bobak@xxxxxxxxxxxxxxx>
  • To: <jherrick@xxxxxxx>, "Wolfgang Breitling" <breitliw@xxxxxxxxxxxxx>
  • Date: Tue, 13 Mar 2007 14:31:42 -0400

Hi JH,

I went through an exercise a while ago where I tried to determine which
segments were candidates for a KEEP or RECYCLE pool.

The query I came up with is:
  select vss.owner,
         vss.object_name,
         vss.statistic_name,
         vss.value,
         ds.bytes segsize,
         ds.buffer_pool
    from v$segment_statistics vss,
         dba_segments ds
   where vss.statistic_name ='physical reads'
     and vss.value > 5000000  -- you may need to play with this threshold for your environment
     and ds.segment_type = vss.object_type
     and ds.segment_name = vss.object_name
     and ds.owner=vss.owner
     and ds.buffer_pool = 'DEFAULT'
order by value
/

Now, this helped me identify the segments that had the largest levels of
physical IO, and the size of the segment.  The idea is, "show me which
segments are in the DEFAULT pool and are doing the largest numbers of
physical reads, and show me how big those segments are."

Next, ask yourself these questions:
 - How much memory do I have to dedicate to KEEP and/or RECYCLE pools?
 - Of the segments returned from the query above, which ones are small,
and easy to entirely cache?  These are candidates for KEEP pool.
 - Which are entirely too large to consider caching?  These are
candidates for RECYCLE pool.
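To make the steps above concrete, here is a minimal sketch of how the pools
might be created and candidate segments assigned to them. The sizes and the
segment names (SCOTT.EMP_PK, SCOTT.BIG_AUDIT_LOG) are placeholders for
illustration only; substitute the segments your query turned up:

```sql
-- Carve out the new pools (sizes are illustrative only)
ALTER SYSTEM SET db_keep_cache_size    = 512M;
ALTER SYSTEM SET db_recycle_cache_size = 256M;

-- Small, hot segment: pin it in the KEEP pool
ALTER INDEX scott.emp_pk STORAGE (BUFFER_POOL KEEP);

-- Large segment that would flood the cache: route it through RECYCLE
ALTER TABLE scott.big_audit_log STORAGE (BUFFER_POOL RECYCLE);
```

Note that on releases with dynamic SGA management the two cache-size
parameters can usually be changed online if free SGA memory is available;
on older releases a restart may be required.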

Once you've got the new pools sized and set up, let the system run a
while, do some monitoring, and see if things improve.  Check
V$DB_CACHE_ADVICE to see if Oracle thinks any of the pools should be
resized.
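If you want to look at the advisor output directly, a query along these
lines (assuming DB_CACHE_ADVICE is ON) shows the estimated physical reads
at each simulated cache size for each pool:

```sql
-- Estimated physical reads at each simulated cache size, per pool
SELECT name,                 -- DEFAULT, KEEP, or RECYCLE
       size_for_estimate,    -- simulated cache size (MB)
       size_factor,          -- relative to current size (1 = current)
       estd_physical_reads
  FROM v$db_cache_advice
 ORDER BY name, size_for_estimate;
```

Rows with size_factor below 1 and only slightly higher estd_physical_reads
suggest a pool could shrink; a sharp drop in estimated reads just above
size_factor 1 suggests it is undersized.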

I had good success implementing KEEP and RECYCLE buffer pools.  Hope it
goes as well for you as it did for me.

Hope that helps,

-Mark

--
Mark J. Bobak
Senior Oracle Architect
ProQuest/CSA

"There are 10 types of people in the world:  Those who understand
binary, and those who don't."

-----Original Message-----
From: oracle-l-bounce@xxxxxxxxxxxxx
[mailto:oracle-l-bounce@xxxxxxxxxxxxx] On Behalf Of jherrick@xxxxxxx
Sent: Tuesday, March 13, 2007 11:40 AM
To: Wolfgang Breitling
Cc: oracle-l@xxxxxxxxxxxxx
Subject: Re: db file sequential read - again

Quoting Wolfgang Breitling <breitliw@xxxxxxxxxxxxx>:

>
> What is your rationale of putting "hot indexes" into their own pool?
> What are you trying to accomplish with this?
>

Thanks for the info Wolfgang.

I guess I was just thinking that if high-usage index blocks were in
their own pool, they wouldn't be aged out by competing activity in the
main (DEFAULT) pool. If they stayed cached, they would not have to be
read from disk. I guess this would have to be tempered with how often
the indexes are modified.

In typing this response, though, I see the error in my thinking: if they
are hot enough, they shouldn't be aged out of the default pool anyway. So
segregation is not going to help.

Would simply caching them in the default pool accomplish the same thing
then?
Or simply increasing the size of the main pool? I have 16Gb of memory
and the buffer cache is currently taking up 6Gb.

BTW...haven't had a chance to look at STATSPACK yet. TIMED_STATISTICS
was set to false when I arrived last week and scheduling a bounce is
problematic. So I'm looking at 'live waits' right now until I can gather
some more useful info.
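(One note on TIMED_STATISTICS: on recent releases it is, as far as I
know, a dynamic parameter, so a bounce may not be needed at all;
something like the following should take effect immediately:)

```sql
-- TIMED_STATISTICS is dynamic; no instance restart should be required
ALTER SYSTEM SET timed_statistics = TRUE;

-- or, to affect only the current session:
ALTER SESSION SET timed_statistics = TRUE;
```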

Cheers

JH

--
//www.freelists.org/webpage/oracle-l

