Re: DWH varchar2(4000)

  • From: Martin Preiss <mtnpreiss@xxxxxx>
  • To: oracle-l@xxxxxxxxxxxxx, ricard.martinez@xxxxxxxxx
  • Date: Mon, 22 Dec 2014 17:44:08 +0100

Ricard,

another point that could result in difficulties was mentioned recently by Nicolay Savvinov: rows with a potential row length (i.e. the sum of the declared sizes of all the columns) that is bigger than the block size are processed row by row, suppressing bulk operations: http://savvinov.com/2014/12/15/4k-bug-is-not-a-bug/.
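To get an idea of how widespread the issue is, something like the following sketch can list tables whose declared (potential) row length exceeds the block size. This is only an illustration: it assumes an 8K block size (adjust the 8192 literal to your db_block_size) and access to DBA_TAB_COLUMNS.

```sql
-- Sketch only: sum the declared column lengths per table and flag
-- those exceeding an assumed 8K block size (adjust to db_block_size).
SELECT owner,
       table_name,
       SUM(data_length) AS potential_row_len
  FROM dba_tab_columns
 GROUP BY owner, table_name
HAVING SUM(data_length) > 8192
 ORDER BY potential_row_len DESC;
```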

Regards

Martin Preiss

Am 22.12.2014 16:01, schrieb Ricard Martinez:
Hi list,

We have a DWH database on 11.2.0.4.
The developers/data modelers are creating all the tables with varchar2(4000) columns as their default standard. They don't care whether they insert 10 bytes, 200, or nothing at all. When asked why this standard is in place, the answer is just: "Just in case we need it".

From my point of view this is not good practice, as it can cause memory waste in client applications, as well as data garbage or errors like ORA-01450: maximum key length (3218) exceeded.
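The ORA-01450 scenario mentioned above can be reproduced with a hedged sketch like the following. The table and index names are made up for illustration; whether the error actually fires depends on the block size of the index tablespace (the 3218-byte limit corresponds to a 4K block size).

```sql
-- Illustration only: in a tablespace with a 4K block size, indexing a
-- full-width varchar2(4000) column can exceed the maximum key length.
CREATE TABLE t_demo (c VARCHAR2(4000));

CREATE INDEX t_demo_ix ON t_demo (c);
-- may fail with: ORA-01450: maximum key length (3218) exceeded
```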

But as I'm not a DWH expert, maybe I'm missing information that proves this really is good practice, and not just a lazy way of working.

I would be glad to hear your opinion, or be pointed to any doc/test that proves me wrong, so I can learn more about this.

Thanks
