RE: DWH varchar2(4000)

  • From: Jonathan Lewis <jonathan@xxxxxxxxxxxxxxxxxx>
  • To: "oracle-l@xxxxxxxxxxxxx" <oracle-l@xxxxxxxxxxxxx>
  • Date: Mon, 22 Dec 2014 17:53:11 +0000


That may be version dependent; do you have any links to demonstrations?
As far as sorting is concerned, I've just done a quick test that showed the
memory requirement was dictated by the used length, not the declared length (in
12c).
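
Something along these lines, roughly (a sketch only -- the names, row counts
and the rpad() data are purely illustrative, not the exact script I ran):

    create table t_short ( v varchar2(30) );
    create table t_wide  ( v varchar2(4000) );

    insert into t_short select rpad(to_char(level),20,'x')
    from dual connect by level <= 100000;

    insert into t_wide  select rpad(to_char(level),20,'x')
    from dual connect by level <= 100000;
    commit;

    -- identical data, different declared lengths: sort both
    select v from t_short order by v;
    select v from t_wide  order by v;

    -- then compare the work area sizes the two sorts actually used
    select sql_id, operation_type, last_memory_used, last_tempseg_size
    from   v$sql_workarea
    where  sql_id in ( ... );   -- the two sql_ids from v$sql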



Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
@jloracle
________________________________
From: oracle-l-bounce@xxxxxxxxxxxxx [oracle-l-bounce@xxxxxxxxxxxxx] on behalf 
of Xiang Rao [xiang_rao@xxxxxxxxxxx]
Sent: 22 December 2014 17:42
To: ricard.martinez@xxxxxxxxx; Martin Preiss
Cc: oracle-l@xxxxxxxxxxxxx
Subject: RE: DWH varchar2(4000)

One issue with varchar2(4000) columns is that when these columns are included
in large aggregations, sorts, or even hash joins, you can run into problems
with memory usage and temp space usage.
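
If you want to see whether that is biting on your system, one rough sketch
(run while the big sort or hash join is still active) is to look at the
active work areas and the temp segments the sessions are holding:

    -- active sort / hash work areas: expected vs. actual memory, temp spill
    select sid, operation_type, expected_size, actual_mem_used,
           number_passes, tempseg_size
    from   v$sql_workarea_active;

    -- temp segment usage per session
    select session_num, sql_id, contents, segtype, blocks
    from   v$tempseg_usage;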

Sent from my Windows Phone
________________________________
From: Ricard Martinez <ricard.martinez@xxxxxxxxx>
Sent: 12/22/2014 12:04 PM
To: Martin Preiss <mtnpreiss@xxxxxx>
Cc: oracle-l@xxxxxxxxxxxxx
Subject: Re: DWH varchar2(4000)

Thanks for that Martin.
Pretty interesting and disturbing at the same time.

Regards

On Mon, Dec 22, 2014 at 4:44 PM, Martin Preiss <mtnpreiss@xxxxxx> wrote:
Ricard,

another point that could result in difficulties was mentioned by Nicolay
Savvinov recently: rows with a potential row length (i.e. the sum of the
declared sizes of the columns) that is bigger than the block size result in
row-by-row processing and suppress bulk operations:
http://savvinov.com/2014/12/15/4k-bug-is-not-a-bug/.
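
A rough way to reproduce the effect described there (just a sketch for an 8K
block size; the table names, row counts and padded data are made up):

    -- declared row length comfortably below the block size
    create table t_narrow ( id number, c1 varchar2(2000), c2 varchar2(2000) );

    -- declared row length above the block size, although the data
    -- actually inserted is identical
    create table t_wide   ( id number, c1 varchar2(4000), c2 varchar2(4000) );

    insert into t_narrow
    select level, rpad('x',100), rpad('x',100)
    from   dual connect by level <= 100000;

    insert into t_wide
    select level, rpad('x',100), rpad('x',100)
    from   dual connect by level <= 100000;

    -- compare redo size / db block gets for the two inserts
    -- (e.g. autotrace statistics, or v$mystat joined to v$statname)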

Regards

Martin Preiss

On 22.12.2014 16:01, Ricard Martinez wrote:
Hi list,

We have a DWH database on 11.2.0.4.
The developers/data modelers are creating all the tables with varchar2(4000)
as the standard by default. They don't care whether they insert 10 bytes, 200
bytes, or nothing.
When asked why this standard is in place, the answer is just: "Just in case we
need it".

From my point of view this is not a good practice, as it can cause memory
waste in client applications, as well as garbage data or errors like
ORA-01450: maximum key length (3218) exceeded.
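
For example, a minimal sketch of the ORA-01450 case (assuming byte semantics
and a block size where the 3218-byte limit from that message applies; the
names are made up):

    create table t ( c1 varchar2(4000) );

    -- fails even on an empty table: the limit is checked against the
    -- declared length, not the data actually stored
    create index t_ix on t (c1);
    -- ORA-01450: maximum key length (3218) exceeded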

But as I'm not a DWH expert, maybe I'm missing information that proves that
this is really a good practice and not just a lazy way of working.

I would be glad to hear your opinion, or be pointed to any doc/test that
proves me wrong, so I can learn about this.

Thanks

