Re: block size

  • From: Greg Rahn <greg@xxxxxxxxxxxxxxxxxx>
  • To: niall.litchfield@xxxxxxxxx
  • Date: Mon, 26 Oct 2009 13:31:58 -0700

On Mon, Oct 26, 2009 at 11:32 AM, Niall Litchfield
<niall.litchfield@xxxxxxxxx> wrote:
> so, discussion with DW architect today
>
> "but this isn't OLTP,this is DW, in a datawarehouse you want the biggest 
> block size you can have".
>
> my question to you (and him)
>
> "why? What's the theory and evidence for this?"
>
> In other words *why* do people think this?

Let's start with the last question first.  People probably think this
because:
1) they think bigger is better, for some unknown, unquantified reason
2) someone somewhere on the Internet wrote it, so it must be true

The reality (the facts) is this:
- a larger block *may* give a slightly better compression ratio, since
more values get packed into each block and Oracle compresses at the
block level (see the first sketch below this list)
- in a DW where physical table/partition scans are prevalent, it is
the IO size that matters, not the block size; Oracle can do 1MB
multiblock reads no matter what the block size is (second sketch)
- 8k is the default and most-tested block size, and block size can
influence execution plans; most people know that a suboptimal
execution plan will easily offset any *potential* gains from a non-8k
block
- with RAC, an 8k block fits within a single jumbo frame (MTU 9000),
which helps reduce system time for interconnect traffic; bigger
blocks get split across frames (third sketch)
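
On the compression point, here is a back-of-envelope sketch in Python.
It uses zlib purely as a stand-in for any block-scoped compression
scheme (Oracle's algorithm is different); the point is only that when
each chunk is compressed independently, bigger chunks amortize the
per-chunk overhead and see more repeated values:

    import zlib
    import random

    random.seed(42)

    def fake_row():
        # Repetitive column values, as in a typical DW fact table.
        return ("2009-10-26|%s|%d|OPEN\n" % (
            random.choice(["US", "UK", "DE"]),
            random.randint(1, 5))).encode()

    data = b"".join(fake_row() for _ in range(200000))

    def chunked_ratio(data, chunk_size):
        # Compress each chunk on its own, like per-block compression.
        compressed = sum(
            len(zlib.compress(data[i:i + chunk_size]))
            for i in range(0, len(data), chunk_size))
        return len(data) / float(compressed)

    for block in (8192, 16384, 32768):
        print("%5d-byte blocks -> ratio %.2fx"
              % (block, chunked_ratio(data, block)))

The gain is real but small, which is why I said "slightly".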
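
On the IO size point, the scan read size is just db_block_size times
db_file_multiblock_read_count, so an 8k database reaches the same 1MB
read as a 16k one. Trivial arithmetic (the parameter values here are
illustrative, not from any particular system):

    # Scan IO size = db_block_size * db_file_multiblock_read_count.
    # Pick MBRC so each multiblock read is 1MB, whatever the block size.
    for db_block_size in (4096, 8192, 16384, 32768):
        mbrc = 1048576 // db_block_size
        print("block %5d * MBRC %3d = %d bytes per read"
              % (db_block_size, mbrc, db_block_size * mbrc))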
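
And the jumbo frame arithmetic (header sizes are approximate, and the
exact interconnect protocol overhead varies):

    import math

    MTU = 9000             # jumbo frame: max IP packet size in bytes
    IP_UDP_HEADERS = 28    # ~20 bytes IP + 8 bytes UDP (illustrative)
    payload = MTU - IP_UDP_HEADERS

    for block in (8192, 16384, 32768):
        frames = math.ceil(block / payload)
        print("%5d-byte block -> %d frame(s)" % (block, frames))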

When people make claims about something, I generally put my
engineering hat on and ask, "What data did you look at to draw those
conclusions (and can I see it too)?"  That question alone is enough to
end many (silly) discussions.

-- 
Regards,
Greg Rahn
http://structureddata.org