RE: Performance issue after creating higher block size tablespace

  • From: "Oliver Jost" <Oliver.Jost@xxxxxxxxxxxxxx>
  • To: <saad4u@xxxxxxxxx>, "Mark W. Farnham" <mwf@xxxxxxxx>
  • Date: Thu, 25 Feb 2010 14:24:25 -0500

Just my two bits on this one.  If you are doing massive inserts, are you 
spending a lot of time allocating space?  If so, your row cache may be very busy 
allocating more space.  You could pre-allocate some space to the segment and 
increase the NEXT extent size to accommodate future growth.
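
(For illustration only, with a hypothetical table name BIG_TAB and hypothetical 
sizes: pre-allocating space and raising NEXT could look roughly like this. Note 
that locally managed tablespaces may ignore the NEXT setting, so the second 
statement matters mostly for dictionary-managed storage.)

    -- Pre-allocate space so the inserts don't have to extend the segment
    ALTER TABLE big_tab ALLOCATE EXTENT (SIZE 500M);

    -- Ask for larger extents on future growth
    ALTER TABLE big_tab STORAGE (NEXT 100M);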
 
Good luck,
   Oliver

________________________________

From: oracle-l-bounce@xxxxxxxxxxxxx on behalf of Saad Khan
Sent: Thu 2/25/2010 12:39 PM
To: Mark W. Farnham
Cc: oracle-l@xxxxxxxxxxxxx
Subject: Re: Performance issue after creating higher block size tablespace


No, the query plan didn't change; the cost changed only marginally.

Yes, I rebuilt the indexes after moving the tables.

This is 10g, so it gathers stats based on skewness. The last_analyzed column 
for these tables shows yesterday.
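
(For reference, a quick way to double-check how fresh the stats on the moved 
tables are, using hypothetical owner and table names:)

    SELECT table_name, num_rows, last_analyzed, stale_stats
      FROM dba_tab_statistics
     WHERE owner = 'APP_OWNER'
       AND table_name IN ('BIG_TAB', 'SRC_TAB1', 'SRC_TAB2');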

It's the joins that are killing the query, I think. I'm going to send the queries 
to the list.

DB_CACHE_SIZE was set to 0, since sga_target is set here, which is 1.7 GB (it's 
a 32-bit installation).
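
(Worth noting: with sga_target set, ASMM auto-sizes only the DEFAULT 4K buffer 
cache; a non-default cache such as db_8k_cache_size has to be sized manually and 
comes out of the same 1.7 GB. A query along these lines shows what each cache 
actually got:)

    SELECT component, current_size/1024/1024 AS size_mb
      FROM v$sga_dynamic_components
     WHERE component LIKE '%buffer cache%';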

I couldn't understand your last message. Maybe you could elaborate on it a bit more.

Thanks 



On Thu, Feb 25, 2010 at 12:10 PM, Mark W. Farnham <mwf@xxxxxxxx> wrote:


        Did your query plan change?
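
        (As an illustration: one way to check is to pull the current plan from 
the cursor cache for the statement's sql_id and compare it with the plan you had 
before; '&sql_id' here is just a placeholder.)

            SELECT *
              FROM TABLE(dbms_xplan.display_cursor('&sql_id', NULL, 'TYPICAL'));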

        Did you also rebuild the indexes after you moved the tables?
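
        (A table move leaves its indexes UNUSABLE until rebuilt; with 
hypothetical names, the check and the fix look roughly like:)

            SELECT index_name, status
              FROM dba_indexes
             WHERE table_owner = 'APP_OWNER'
               AND status = 'UNUSABLE';

            ALTER INDEX app_owner.big_tab_pk REBUILD;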

        Gather statistics? Are they a big change from the last time you 
gathered statistics?
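
        (If they do need refreshing, a sketch with dbms_stats and hypothetical 
names, letting Oracle choose histogram buckets, would be:)

            BEGIN
              dbms_stats.gather_table_stats(
                ownname    => 'APP_OWNER',
                tabname    => 'BIG_TAB',
                cascade    => TRUE,
                method_opt => 'FOR ALL COLUMNS SIZE AUTO');
            END;
            /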

        Is the new tablespace in files that occupy comparable underlying 
volumes in terms of I/Os supported per unit time?

        Is your storage in some flavor of SAME, or were the tables being 
selected from formerly on independently operating units of I/O (separate, in 
particular, from the insert target), and have you now lumped them all together?
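
        (To see which files, and therefore which underlying volumes, the old and 
new tablespaces actually share, a starting point is:)

            SELECT tablespace_name, file_name
              FROM dba_data_files
             ORDER BY tablespace_name;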

         

        What was your db_cache_size before?

        Are you memory-lean on the machine and using filesystems? Have you 
robbed the OS of file caching space by adding to the SGA size?

         This is a very critical app and I don't want to rebuild the "whole" thing. 
Moving a bunch of tables is entirely different, of course.

        Those are all bits of a partial change analysis you might do, not that 
you've stepped in it. If one or more of them is on target (measure, don't 
guess) then you might have a shortcut out of your problem. Others might add to 
the list.

         

        Now if you had a time machine, I'd say get in it and measure things to 
evaluate what (if any) performance benefit was to be expected if you could get 
I/O service time to zero by moving to 8K. Then, if that idealized ceiling of 
possible benefit was significant, figure out what the likely benefit was if 
everything meshed in your favor with no side effects. Then, if that still 
seemed worthwhile, plan and engineer the move so that you ruled out negative 
side effects in advance. (And I'm wondering why not rebuild the whole thing at 
8K if the database block size was measured to predict an advantage.)
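
        (Illustrative arithmetic only, with made-up numbers: if the job ran 
1,000 seconds and 300 of those were I/O wait, then even driving I/O service 
time to zero caps the possible gain at 30%, and a realistic block-size change 
recovers only some fraction of that 300 seconds. If the measured ceiling had 
come out at a few percent instead, the move would not have been worth 
engineering.)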

         

        So what to really do now? See where the time is going. One often useful 
bit of information is routing the output of the select to /dev/null and seeing 
how long that takes. If the lion's share of your time is in the select, fix 
that. Likewise, if you queue up the results of the select in a single table and 
then just select from there and insert into the destination, does that reveal a 
bottleneck on the insert side?
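
        (A rough sketch of both experiments, with placeholder table and column 
names standing in for the real ones:)

            -- SELECT side: fetch every row but discard the output (SQL*Plus)
            SET TIMING ON
            SET AUTOTRACE TRACEONLY STATISTICS
            SELECT s1.col1, s2.col2
              FROM src_tab1 s1 JOIN src_tab2 s2 ON s1.id = s2.id;

            -- INSERT side: stage the result once, then time the insert by itself
            CREATE TABLE stage_rows AS
            SELECT s1.col1, s2.col2
              FROM src_tab1 s1 JOIN src_tab2 s2 ON s1.id = s2.id;

            INSERT INTO target_tab (col1, col2)
            SELECT col1, col2 FROM stage_rows;
            COMMIT;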

         

        Before you would move back, you would want to have some evidence that 
moving back would eliminate some problem. Unless of course the urgency now is 
such that just getting back where you were right away is more important than 
minimizing the amount of work to reach better performance. Then you could 
pretend you went through the time machine, figure out where your time is going 
and attack the problem from that standpoint.

         

        Regards,

         

        mwf

         

        
________________________________


        From: oracle-l-bounce@xxxxxxxxxxxxx 
[mailto:oracle-l-bounce@xxxxxxxxxxxxx] On Behalf Of Saad Khan
        Sent: Thursday, February 25, 2010 11:24 AM 

        To: oracle-l@xxxxxxxxxxxxx
        Subject: Performance issue after creating higher block size tablespace

        

         

        Hi gurus. 


        I have a production database running on Oracle 10.2.0.3 on SUSE Linux 10. 
The default DB_BLOCK_SIZE for the database is 4K. 
        
        There was a performance complaint from the users, and the developers 
asked me to look into it. They particularly complained about one stored 
procedure that was taking too much time. When I looked into the stored proc, I 
saw an insert statement into one particular table, which has something more 
than 4 million rows, selecting from a bunch of other tables.
        
        So what I did was create a new tablespace with an 8K block size and move 
all the tables used in that SP into the new tablespace. 
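
        (In command form, the change described above amounts to roughly the 
following, with file and object names replaced by placeholders:)

            ALTER SYSTEM SET db_8k_cache_size = 200M;  -- needed before an 8K tablespace can exist

            CREATE TABLESPACE ts_data_8k
              DATAFILE '/u02/oradata/prod/ts_data_8k_01.dbf' SIZE 2G
              BLOCKSIZE 8192;

            ALTER TABLE app_owner.big_tab MOVE TABLESPACE ts_data_8k;
            ALTER INDEX app_owner.big_tab_pk REBUILD;  -- indexes go UNUSABLE after the move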
        
        And guess what: the response that came back after that shows it's taking 
almost double the time it was taking earlier. The AWR report shows a lot of 
user I/O activity, and the tablespace that is hit most is the new one. Now, is 
it due to the different block size of this new tablespace? Is Oracle finding it 
hard to manage 8K blocks inside an SGA originally designed for 4K?
        
        The db_cache_size is set to 8192 and db_8k_cache_size is also set to 
8192.
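
        (As an aside, the actual size of each buffer cache, and a dynamic resize 
of the 8K one, can be checked and done roughly as below, with a made-up target 
size; whatever goes to db_8k_cache_size is carved out of the same 1.7 GB 
sga_target rather than tuned automatically:)

            SELECT name, block_size, current_size  -- current_size is in MB
              FROM v$buffer_pool;

            ALTER SYSTEM SET db_8k_cache_size = 256M;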
        
        Is there any other step I can take? I don't want to revert back to 4K; 
I think it should work.
        
        Any suggestions?
        
        Thanks in advance.

        

