RE: Any performance benefits in going to db_16k_cache_size or db_32k_cache_size

  • From: "Mark W. Farnham" <mwf@xxxxxxxx>
  • To: <arunchakrapanirao@xxxxxxxxx>, <oracle-l@xxxxxxxxxxxxx>
  • Date: Wed, 10 Jan 2007 12:53:26 -0500

I agree with the Cary Millsap/Mark Powell/Tom Kyte thread. Times are
different from when Oracle's "best practices" recipe was primarily to use 2K
block size databases, fast full index scans didn't exist, and you could
dramatically reduce the stable number of levels in an index tree simply by
moving to a larger block size. So the old quotes lying around, often
repeated today without attribution or the context of the times, advising the
largest block size available for large databases represent (IMHO) thinking
that was correct then but is outdated now.

That said, let's consider your apparent case with an eye toward which cases
to focus your measurements on, under the assumption that an all-cases
enumeration is not practical.

Let's assume for the moment that packing your data more densely will not
cause a concurrent insert/update problem of any significance, based on your
statement that you are 90% sequential reads.

So far in your benchmark you have apparently only messed with indexes, and
you were surprised at no improvement in elapsed time. For starters, you will
only even theoretically have a chance at a big improvement if the move to
the larger block size makes a stable reduction in the height of the index
tree. If you are already set up to benchmark with varying block sizes for
the indexes in question, you can do the laugh test on that pretty quickly -
if the height of the newly built indexes at the larger block size does not
go down, you are limited to the slight increase in density of the leaf
blocks (due to a lower ratio of block overhead to block contents at larger
block sizes) for any possible benefit. And unless you're frequently
traipsing down the index tree rather than scanning across the leaf blocks,
you're limited to that slight increase in density anyway. If you have such
queries, those are the ones that could possibly show a big improvement, so
those are the ones to measure - if they are a significant fraction of your
load - to see whether further research on the overall effect on your actual
data is even justifiable.
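
A quick way to run that laugh test, as a sketch - the index name is a
placeholder, and note that VALIDATE STRUCTURE locks the index while it
runs, so keep it on the benchmark copy:

  -- BLEVEL is the height of the B-tree minus one (branch levels from
  -- root to leaf); compare it before and after the rebuild at 16K/32K.
  SELECT index_name, blevel, leaf_blocks
  FROM   user_indexes
  WHERE  index_name = 'MY_BIG_IX';   -- placeholder name

  -- For the exact height and leaf density, validate and read
  -- INDEX_STATS (one row, visible only in the same session):
  ANALYZE INDEX my_big_ix VALIDATE STRUCTURE;
  SELECT height, lf_blks, pct_used FROM index_stats;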

An entirely separate dimension is how frequently you are reading from cache
versus the disk farm, and whether any level in the physical retrieval of
blocks injects a significant penalty for double or quadruple size blocks as
your disk farm is currently configured.
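
If you want a number rather than a guess on that, the average read times
from the wait interface are a reasonable first look - a sketch, assuming
you can query v$system_event:

  -- Compare average single-block vs. multiblock read latency (ms).
  -- Larger blocks only pay a transfer-size penalty if these climb when
  -- the benchmark switches to the bigger block size.
  SELECT event,
         total_waits,
         ROUND(time_waited_micro / NULLIF(total_waits, 0) / 1000, 2)
           AS avg_ms
  FROM   v$system_event
  WHERE  event IN ('db file sequential read', 'db file scattered read');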

If you move on to testing table block sizes and maximum row density is not a
concurrent update problem, then whether you are limited at best to a small
proportional gain (from the improved ratio of block contents to block
overhead) depends on whether you have individual rows that will not fit in a
single block at the smaller size but will fit at the larger size. You don't
even need to rebuild to find that out; just count the rows whose sum of
vsizes across all columns exceeds the usable block space after overhead at
your parameters for a given block size.
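
As a sketch of that counting query - table and column names are invented,
and the usable-space threshold is a round placeholder you'd derive from
your block size, PCTFREE, and block/row overhead:

  -- Count rows too wide to fit in one 8K block's usable space.
  -- VSIZE returns the stored byte length of a column (NULL for NULLs,
  -- hence the NVLs). ~8000 stands in for 8192 minus overhead/PCTFREE.
  SELECT COUNT(*) AS rows_wider_than_one_block
  FROM   big_fact_table
  WHERE    NVL(VSIZE(col1), 0)
         + NVL(VSIZE(col2), 0)
         + NVL(VSIZE(col3), 0) > 8000;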

Now, there are valid motivations to use larger block sizes, but I don't
forecast many in your case. If you have some queries where nearly all the
columns you need are in the index used for the query (remembering that you
are 90% read, so ignoring the update overhead of an extra index), then
adding the remaining columns to a new index might be of some help, since you
won't have to read the table's blocks at all. If you needed to move to a
larger index block size in order to accommodate those columns in the leaf,
that might be a reason. Aside from designing the insert/update/delete system
that feeds your DSS so that there is cohesion between row selectivity and
block selectivity, adding columns to indexes is probably the most effective
thing you can do to increase throughput after you have eradicated bad query
plans. (I can't wait to see that referenced without the "ignoring the update
overhead" caveat...)
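
For what it's worth, the covering-index idea is just this (all names
invented; the tablespace clause only matters if the wider leaf entries
force the larger block size):

  -- If the query needs only order_date, status, and amount, appending
  -- AMOUNT to the existing (order_date, status) index lets Oracle answer
  -- from the index alone and skip the table blocks entirely.
  CREATE INDEX orders_cover_ix
    ON orders (order_date, status, amount)
    TABLESPACE idx_16k;   -- hypothetical 16K-blocksize tablespace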

I hope this gives you some focus for your tests.

Regards,

mwf



-----Original Message-----
From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx] On
Behalf Of arun chakrapani rao
Sent: Tuesday, January 09, 2007 11:40 PM
To: oracle-l@xxxxxxxxxxxxx
Subject: Any performance benefits in going to db_16k_cache_size or
db_32k_cache_size

Hi all,
I am currently working on a 1.5T DSS running on an 8K block size.
This db is 90% of the time doing sequential reads.
Was wondering, if we go for a 16K or 32K block size on indexes alone,
would there be any performance benefit?
Has anybody implemented this and seen any performance benefits?
Please do share your experience.
I was trying to do some benchmarking here for one of the queries with
8K, 16K, and 32K index tablespaces.
All I am seeing is the logical reads going down by half, but the
elapsed time for these queries is still the same.
Now what am I missing?
Please do share your thoughts.
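
For reference, the setup I am testing with looks roughly like this (cache
size, file name, and sizes are placeholders):

  -- A buffer cache for the non-default block size must exist first.
  ALTER SYSTEM SET db_16k_cache_size = 256M;

  -- Then a tablespace in that block size; rebuild the index into it.
  CREATE TABLESPACE idx_16k
    DATAFILE '/u01/oradata/idx16k_01.dbf' SIZE 1G  -- placeholder path
    BLOCKSIZE 16K;
  ALTER INDEX my_big_ix REBUILD TABLESPACE idx_16k;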

--
thanks in advance.
Arun
--
//www.freelists.org/webpage/oracle-l


