RE: Bigger block sizes

  • From: Jonathan Lewis <jonathan@xxxxxxxxxxxxxxxxxx>
  • To: "oracle-l@xxxxxxxxxxxxx" <oracle-l@xxxxxxxxxxxxx>
  • Date: Fri, 2 Oct 2015 07:52:10 +0000



I'm a little surprised by the scale of the improvement.

It would have been interesting to find out where the difference came from. I
would expect some of the time to come from the increased latch activity of
having to populate 4 times as many buffers, and some write effects from having
to write 4 times as many buffers; and if you didn't set the chunksize to 32K
for the 8KB block test there would be some excess from making four separate
calls to db file sequential read in place of one db file parallel read.
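
As a rough illustration of the chunksize point (a minimal sketch, not taken
from the original tests - the table and tablespace names are made up): on an
8KB-block tablespace the LOB CHUNK can still be set to 32KB, so each LOB piece
is read with one multiblock call rather than four single-block calls, e.g.:

  -- Hypothetical names; assumes an existing 8KB-block tablespace USERS.
  -- CHUNK is a multiple of the block size, up to 32KB, so 32768 bytes
  -- here means each LOB piece spans 4 of the 8KB blocks.
  CREATE TABLE blob_test_8k (
    id  NUMBER PRIMARY KEY,
    doc BLOB
  )
  LOB (doc) STORE AS BASICFILE (
    TABLESPACE users
    CHUNK      32768
    NOCACHE
  );

  -- Then compare where the read time goes for the current session:
  SELECT event, total_waits, time_waited
  FROM   v$session_event
  WHERE  sid = SYS_CONTEXT('USERENV', 'SID')
  AND    event IN ('db file sequential read', 'db file parallel read');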

I wonder if it's related to something that has been on my todo list for the
last 11 years - why creating a context index on bfiles takes MUCH longer than
creating a context index on the same data when it has been loaded as CLOBs.
I have a note to myself that there was a "missing time" component that might
be something to do with bfile seeks - maybe your 20% was something to do with
fewer seeks when reading 32KB at a time. I may have to dust the test case down
and bring it to the top of my list :(
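
(For anyone who wants to try that comparison, a minimal sketch of the two
set-ups - the directory path, file names and table names are hypothetical
placeholders, not the original test case - would be:

  -- Documents left on the file system and indexed through BFILEs
  CREATE DIRECTORY doc_dir AS '/u01/docs';
  CREATE TABLE docs_bfile (id NUMBER PRIMARY KEY, doc BFILE);
  INSERT INTO docs_bfile VALUES (1, BFILENAME('DOC_DIR', 'doc_1.txt'));
  CREATE INDEX docs_bfile_ctx ON docs_bfile(doc)
    INDEXTYPE IS CTXSYS.CONTEXT;

  -- The same data loaded into the database as CLOBs
  -- (e.g. with DBMS_LOB.LOADCLOBFROMFILE) and indexed the same way
  CREATE TABLE docs_clob (id NUMBER PRIMARY KEY, doc CLOB);
  CREATE INDEX docs_clob_ctx ON docs_clob(doc)
    INDEXTYPE IS CTXSYS.CONTEXT;

Timing the two CREATE INDEX statements and checking the session wait events
is where the "missing time" ought to show up.)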




Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
@jloracle
________________________________
From: oracle-l-bounce@xxxxxxxxxxxxx [oracle-l-bounce@xxxxxxxxxxxxx] on behalf
of Sheehan, Jeremy [JEREMY.SHEEHAN@xxxxxxxxxxxxxxxxx]
Sent: 01 October 2015 22:53
To: Kevin Jernigan; contact@xxxxxxxx; oracle-l@xxxxxxxxxxxxx; oralrnr@xxxxxxxxx
Subject: Re: Bigger block sizes

BasicFiles, about 8 years ago.

Sent from Nine<http://www.9folders.com/>

From: Kevin Jernigan <kevin.jernigan@xxxxxxxxxx>
Sent: Oct 1, 2015 4:59 PM
To: Sheehan, Jeremy; contact@xxxxxxxx; oracle-l@xxxxxxxxxxxxx; oralrnr@xxxxxxxxx
Subject: Re: Bigger block sizes

With BasicFiles or SecureFiles? -KJ

--
Kevin Jernigan
Senior Director Product Management
Advanced Compression, Hybrid Columnar
Compression (HCC), Database File System
(DBFS), SecureFiles, Database Smart Flash
Cache, Total Recall, Database Resource
Manager (DBRM), Direct NFS Client (dNFS),
Continuous Query Notification (CQN),
Index Organized Tables (IOT), Information
Lifecycle Management (ILM)
+1-650-607-0392 (o)
+1-415-710-8828 (m)

On 10/1/15 9:17 AM, Sheehan, Jeremy wrote:

Orlando,

I did some testing and implemented a BLOB tablespace with a 32K block size, and
I saw about a 20% improvement in performance when saving files to and
retrieving them from a BLOB column.
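
A minimal sketch of that kind of set-up (the sizes, file names and table names
below are made up for illustration - a non-default block size needs its own
buffer cache before the tablespace can be created):

  -- A buffer cache for 32K blocks must exist first.
  ALTER SYSTEM SET db_32k_cache_size = 256M;

  -- Tablespace with the larger block size, used only for the LOB segment.
  CREATE TABLESPACE blob_ts_32k
    DATAFILE '/u01/oradata/ORCL/blob_ts_32k_01.dbf' SIZE 10G
    BLOCKSIZE 32K;

  -- The table stays in the default 8K tablespace; only the BLOB moves.
  CREATE TABLE files_tab (
    id        NUMBER PRIMARY KEY,
    file_body BLOB
  )
  LOB (file_body) STORE AS (
    TABLESPACE blob_ts_32k
  );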

Jeremy


-----Original Message-----
From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx] On
Behalf Of Stefan Koehler
Sent: Thursday, October 01, 2015 12:03 PM
To: oracle-l@xxxxxxxxxxxxx; oralrnr@xxxxxxxxx
Subject: Re: Bigger block sizes



Hi Orlando,
it depends (as always).

Franck Pachot has written a great blog post with several demos about this
topic: http://blog.dbi-services.com/do-the-block-size-matter/

Best Regards
Stefan Koehler

Freelance Oracle performance consultant and researcher
Homepage: http://www.soocs.de
Twitter: @OracleSK

Orlando L <oralrnr@xxxxxxxxx> wrote on 30 September 2015 at 23:29:

List,

Does anyone on the list use a non-default block size greater than 8K for
your Oracle DBs? If so, is it for warehousing/OLAP-type applications?
What advantages do you get with them? Any disadvantages?

Orlando.
--
//www.freelists.org/webpage/oracle-l


