RE: Bigger block sizes

  • From: "Mark W. Farnham" <mwf@xxxxxxxx>
  • To: <orahawk@xxxxxxxxx>, <oralrnr@xxxxxxxxx>
  • Date: Sun, 18 Oct 2015 12:39:13 -0400

1) When the default blocksize was 2K, it was somewhat trivial to produce
significant improvements in both performance and administration by moving to 8K
or 16K for a plethora of reasons. Often the improvements could be measured with
the minute hand or hour hand on the clock, so very few reported precisely
measured improvements. Most of the rebuilds were forced by getting close enough
to the maxextents limits (which used to vary by block size) to be scared.

2) Going between 8K and larger block sizes, together with the huge numbers of
allowable extents, the likelihood that sane row sizes already fit conveniently,
and index trees that remain trivially shallow unless your table is truly
gigantic (in which case some form of partitioning and partition pruning is
probably called for), a careful study is required to tease out whether there is
in fact any improvement, let alone an improvement worth rebuilding a production
dataset to glean.

3) Many false positives on improvement came from the following sequence (a
sketch of a fairer comparison follows the list):

a. Fill up a production table gradually over time with both lengthening
updates and occasional deletes, quite possibly leaving what might be the best
index access path with a crappy clustering factor much of the time, until
eventually things are slow enough to call someone in to investigate.

b. Run a test

c. Rebuild everything into a bigger block size, coincidentally minimizing
row migration and chaining and possibly dramatically improving one or more
index cluster factors

d. Run a test, declare the bigger block size to be an improvement

e. Get questioned on the details, rebuild a test into the original blocksize

f. Run a test and see that it was the artifacts of the rebuild, followed
immediately by the test, that generated the gain.
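
A minimal sketch of the fairer comparison implied by (e) and (f), assuming two
tablespaces of the different block sizes already exist (db_16k_cache_size
configured) and using placeholder object and column names; the point is that
both sides get a fresh rebuild, so the rebuild artifacts cancel out:

  -- Rebuild the SAME data freshly into both block sizes before comparing,
  -- so neither side benefits from the rebuild artifacts alone.
  CREATE TABLE t_8k  TABLESPACE ts_8k  AS SELECT * FROM big_history_table;
  CREATE TABLE t_16k TABLESPACE ts_16k AS SELECT * FROM big_history_table;

  CREATE INDEX t_8k_ix  ON t_8k  (call_date) TABLESPACE ts_8k;
  CREATE INDEX t_16k_ix ON t_16k (call_date) TABLESPACE ts_16k;

  -- Then run the identical workload against both copies and compare elapsed
  -- time, physical reads and the execution plans, not just "consistent gets".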

4) Do not abstract from this that all rebuilds are good in and of
themselves. It is possible from time to time that a periodic rebuild helps, and
the trick is to figure out how perishable the artifacts of the rebuild are.
Many rebuilds generate a very temporary performance improvement (some none at
all). Thinking about why a particular rebuild is promising and whether it will
be highly perishable is important. Some productive rebuilds cure an
accumulation bug that has since been fixed by Oracle. I won’t even start the
bit about bit maps (read Foote, Lewis, Antognini and their links to others.) So
should you rebuild? IT DEPENDS. Rebuild in a bigger blocksize? IT DEPENDS.
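
One hedged way to gauge that perishability (object names below are
placeholders): snapshot the index shape right after the rebuild, let normal DML
run for a few weeks, regather statistics and compare.

  -- Snapshot index shape right after the rebuild...
  SELECT index_name, blevel, leaf_blocks, clustering_factor, last_analyzed
    FROM dba_indexes
   WHERE owner = 'APP_OWNER' AND table_name = 'BIG_HISTORY_TABLE';

  -- ...then, a few weeks of normal DML later, regather and rerun the query.
  -- If leaf_blocks and clustering_factor have drifted straight back to where
  -- they started, the benefit of the rebuild was perishable.
  EXEC DBMS_STATS.GATHER_INDEX_STATS(ownname => 'APP_OWNER', indname => 'BIG_HISTORY_IX');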



mwf





From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx] On
Behalf Of Dragutin Jastrebic
Sent: Sunday, October 18, 2015 6:22 AM
To: oralrnr@xxxxxxxxx
Cc: oracle-l@xxxxxxxxxxxxx
Subject: Re: Bigger block sizes



Hello



This topic comes back from time to time, and has been the subject of somewhat
emotive discussions in the past.



There was another discussion that started on Asktom and then continued on
Oracle's OTN about 8 years ago.



https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:44779427427730#463291000346484016



https://community.oracle.com/thread/583865?start=15&tstart=0



etc.



Since I feel a little bit responsible for this, let me say a couple of words.



Eight years ago my DBA friend and I were playing on a test platform with a call
detail records database and had the idea to test a bigger-blocksize tablespace
by putting a table and index of historical records in a 16K tablespace. My
friend was inspired by the example by Robin Schumacher that was mentioned in
the book by Donald Burleson. The first tests were surprising indeed: the
execution time was faster, and the number of logical reads was almost halved.
Knowing that there was an ongoing discussion on the Asktom thread, he hurried
to post it there, hiding behind the "The reader from Russia" pseudonym.
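
For anyone who wants to repeat that kind of experiment today, a rough sketch of
that kind of test setup (names, sizes and the test query are placeholders; the
datafile clause assumes OMF; a non-default block size needs its own buffer
cache):

  -- A non-default block size needs its own buffer cache before the
  -- tablespace can be created.
  ALTER SYSTEM SET db_16k_cache_size = 256M;

  CREATE TABLESPACE cdr_hist_16k DATAFILE SIZE 20G BLOCKSIZE 16K;

  -- Move the historical table and rebuild its index into the 16K tablespace.
  ALTER TABLE cdr_history MOVE TABLESPACE cdr_hist_16k;
  ALTER INDEX cdr_history_ix REBUILD TABLESPACE cdr_hist_16k;

  -- Compare the same query in both configurations on elapsed time, consistent
  -- gets and physical reads (and keep the tkprof traces this time).
  SET AUTOTRACE TRACEONLY STATISTICS
  SELECT COUNT(*) FROM cdr_history WHERE call_date >= DATE '2007-01-01';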



Tom was not convinced, and asked to see the tkprof trace.



Unfortunately, we really had a lot of work at the time, the test database
was refreshed with a new version, and the old data was not available anymore.
A week later, I tried the same test with what was supposed to be almost
the same data sample, but I was simply unable to reproduce the case - this
time both tests gave the same results: same execution time, same logical
read statistics and so on.



Being unable to provide any further information, my DBA friend did not post
any follow-up on Asktom.



At the time, I had never posted any comment on any forum; I had only been
reading them occasionally.



I did not even see that Donald Burleson immediately took our example and put it
on his website as "another evidence of the benefits of the bigger blocksize"!



It was only about 2 or 3 years later that I discovered how much discussion
between Richard Foote, Jonathan Lewis, Donald Burleson and others had followed
our example...



If only I had known it before, I would certainly have worked further on this
test case.



Thinking about it later, I believe that, for some reason, the execution plans
were different, so there was a full scan in the slower test (8K tablespace) and
an index range scan in the faster test (16K tablespace); that is the only
explanation.
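
That kind of doubt is easy to settle nowadays; a small sketch with a placeholder
query, using the gather_plan_statistics hint and DBMS_XPLAN to see the plan that
actually ran and its real buffer gets:

  -- Run the statement with row-source statistics enabled...
  SELECT /*+ gather_plan_statistics */ COUNT(*)
    FROM cdr_history
   WHERE call_date >= DATE '2007-01-01';

  -- ...then pull the actual plan of the last statement in this session:
  -- a TABLE ACCESS FULL vs. INDEX RANGE SCAN difference shows up immediately,
  -- along with the real "Buffers" column per plan step.
  SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));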



And talking about Robin Schumacher's example, Richard Foote explained on the
OTN thread that the difference in the number of logical reads comes from the
way Oracle counts them with different block sizes, and is not related to
performance.
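
A rough illustration of that counting effect, just to make the arithmetic
concrete (the figures are made up for the example):

  1 MB of well-clustered rows ~= 1,048,576 bytes
  at  8K per block: 1,048,576 /  8,192 = 128 blocks, so ~128 consistent gets
  at 16K per block: 1,048,576 / 16,384 =  64 blocks, so  ~64 consistent gets

The same bytes are read in both cases; only the unit of counting changes.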



So, as far as I know, nobody has yet provided a good test case proving that
there is a true performance benefit from a bigger blocksize.



If anyone is facing such a situation, they are warmly invited to publish it!



And I don't think that Exadata has anything to do with bigger blocksizes,
since it does not even exchange blocks, only rows.





My 2 cents



Dragutin




