RE: Bigger block sizes

  • From: "Mark W. Farnham" <mwf@xxxxxxxx>
  • To: <oraclealchemist@xxxxxxxxx>, "'Tim Gorman'" <tim@xxxxxxxxx>, <andrew.kerber@xxxxxxxxx>
  • Date: Thu, 1 Oct 2015 14:39:06 -0400

AND…



None of the current discussion matches up well with some literature you might
come across from “ancient” times, when Oracle’s default was 2K. For reasons
related to all of the following: getting rows to fit in a single block; the
larger absolute free space left by the default PCTFREE, which mitigated row
migration and row chaining; index depth; and fixed maximum numbers of extents
based on block size, it was often easy to improve performance dramatically via
the shotgun approach of increasing the block size to 8K, 10K, or 16K (from 2K).
You might even come across a blanket recommendation from me to use the largest
block size supported on your platform for big databases.
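
For instance, row migration and chaining (one of the symptoms a bigger block
used to cure) can still be checked for. A minimal sketch, with a placeholder
schema and table name, noting that CHAIN_CNT is populated by ANALYZE rather
than DBMS_STATS:

    -- Placeholder table; ANALYZE (not DBMS_STATS) fills CHAIN_CNT.
    ANALYZE TABLE scott.emp COMPUTE STATISTICS;

    -- Tables with at least one migrated or chained row.
    SELECT table_name, num_rows, chain_cnt
      FROM user_tables
     WHERE chain_cnt > 0;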



While there is no reason to avoid experimentation in test, and there are likely
cases where an increase from 8K will yield improvement, most of the “ancient”
shooting-fish-in-a-barrel reasons are simply removed by the 8K starting default
and the modernization of Oracle’s extent management. And you would want any
improvement to be significant before taking upon yourself the manual buffer
cache size management that Oracle omitted from automation when you have
multiple block sizes, and before subjecting yourself to less-exercised bits of
the code path.
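
To make that overhead concrete, here is a sketch of what a non-default block
size entails (the names and sizes are made up, not recommendations): the
non-default cache must be created and sized by hand, because automatic memory
management only tunes the default buffer cache.

    -- The 16K cache sits outside automatic tuning; 256M is arbitrary.
    ALTER SYSTEM SET db_16k_cache_size = 256M SCOPE=BOTH;

    -- Hypothetical tablespace name and datafile path.
    CREATE TABLESPACE ts16k
      DATAFILE '/u01/oradata/ORCL/ts16k01.dbf' SIZE 1G
      BLOCKSIZE 16K;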



mwf



From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx] On
Behalf Of Steve Karam
Sent: Thursday, October 01, 2015 12:35 PM
To: Tim Gorman; andrew.kerber@xxxxxxxxx
Cc: ORACLE-L
Subject: Re: Bigger block sizes



Andy,



That thread (re: the bug) has a tangled mess of comments at the bottom best
left archived.



But yes, IIRC the client requested we try a 16K block size and it drastically
improved performance; however, this wasn’t due to the block size per se. It was
due to a bug in ASSM, present from 9i through 11g at the time (not sure where
it stands now), that was triggered when non-default block sizes were used in
conjunction with particular PCTFREE settings. In the end the customer declared
it a “fix” because of the time it would have taken to resolve the bug another
way. I’ve not seen a definitive, non-bug-induced case where changing the block
size provides a guaranteed, tangible performance benefit, particularly one that
would make the risks and unknowns of a non-default block size worthwhile.
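
For what it’s worth, the kind of change we tested amounts to something like
this (all names hypothetical; the 16K tablespace and its cache must already
exist):

    -- Move a table into a 16K ASSM tablespace with an explicit PCTFREE.
    ALTER TABLE app.orders MOVE TABLESPACE ts16k PCTFREE 10;

    -- Indexes are left UNUSABLE by the move and must be rebuilt.
    ALTER INDEX app.orders_pk REBUILD;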



Tim said it best: are you prepared to test all these factors?



Steve Karam

OracleAlchemist.com

On October 1, 2015 at 12:27:10 PM, Andrew Kerber (andrew.kerber@xxxxxxxxx)
wrote:

Tim can always be counted on as the voice of rationality. Here is the link I
was referencing, by the way. There are several discussions on the subject on
AskTom, and as I recall Steve Karam ran into an Oracle bug with I/O on some
block sizes a few years ago. I think it has since been fixed:


https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:5942798800346009065



On Thu, Oct 1, 2015 at 11:20 AM, Tim Gorman <tim@xxxxxxxxx> wrote:

Franck's post is great, but doesn't cover some other factors that impact the
decision.

Consider a RAC database with a lot of cross-instance communication.
Cross-instance communication is clearly dependent on how the application was
designed, configured, and deployed. Jumbo frames are recommended for the
private interconnect in RAC because they raise the MTU from the default of
1500 bytes to 9000, so each frame carries close to 9000 bytes of payload (a
little less after packet headers). When DB_BLOCK_SIZE is 8192, an 8K block
plus headers fits in a single jumbo frame, so MTU=9000 is clearly optimal.
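
Before worrying about block size versus MTU, it is worth measuring how much
block shipping is actually happening. A quick sketch using statistic names
from GV$SYSSTAT:

    -- Cumulative cross-instance block transfers since instance startup.
    SELECT inst_id, name, value
      FROM gv$sysstat
     WHERE name IN ('gc cr blocks received', 'gc current blocks received')
     ORDER BY inst_id, name;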

Now, just because DB_BLOCK_SIZE changed to 16384 or 32768 doesn't mean that
MTU=9000 is no longer a good thing; it's just not as good as before, since each
block now spans more than one frame.

Is MTU <= 9000 a problem? Increasing the MTU doesn't increase native network
throughput, but it does cut down on the work the server does to packet-ize and
un-packet-ize, and so it does decrease CPU utilization. DB_BLOCK_SIZE=8192
makes an Oracle block slide nicely into a single packet with a minimum of
effort. Having to packet-ize and un-packet-ize across multiple packets is only
a matter of expending more CPU.

Is CPU utilization a problem? Only if your servers are maxed out on CPU. If
they aren't, then no problem. If they are, then you have a serious problem.

The upshot is: are you prepared to test all these factors (and others not
mentioned)?

If the answer is yes, then have at it, and good luck.

If you're just looking for easy optimizations, then this isn't it. There are
far more effective optimizations to consider.

My US$0.02...






On 10/1/15 10:02, Stefan Koehler wrote:

Hi Orlando,
it depends (as always).

Franck Pachot has written a great blog post with several demos about this
topic: http://blog.dbi-services.com/do-the-block-size-matter/
Best Regards
Stefan Koehler

Freelance Oracle performance consultant and researcher
Homepage: http://www.soocs.de
Twitter: @OracleSK


Orlando L <oralrnr@xxxxxxxxx> wrote on 30 September 2015 at 23:29:

List,
Does anyone on the list use a non-default block size greater than 8K for
your Oracle DBs? If so, is it for warehousing/OLAP-type applications?
What advantages do you get with them? Any disadvantages?
Orlando.







--

Andrew W. Kerber

'If at first you don't succeed, don't take up skydiving.'
