RE: Characterset question?

  • From: "Justin Cave (DDBC)" <jcave@xxxxxxxxxxx>
  • To: <tim@xxxxxxxxx>, <oracle-l@xxxxxxxxxxxxx>
  • Date: Thu, 10 Nov 2005 17:02:34 -0700

If you are not planning to use NCHAR/NVARCHAR2 columns anyway (and why
would you, if they used the same character set as CHAR/VARCHAR2
columns?), I can't see any benefit to making the database and national
character sets match.  Probably no downside, though.
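
For reference, both settings are visible in the data dictionary.  A
quick check (this uses the standard NLS_DATABASE_PARAMETERS view;
nothing installation-specific assumed):

  SELECT parameter, value
    FROM nls_database_parameters
   WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');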

In a Unicode database, though, your database character set has to be a
strict binary superset of ASCII, so you'll have to use UTF8 (or
AL32UTF8).  These are variable-width character sets, so English
characters generally require 1 byte to encode, European characters
generally require 2 bytes, and Asian characters generally require 3
bytes (I'm ignoring the Unicode 3.1 supplementary-character oddballs).
The national character set, on the other hand, is generally AL16UTF16,
a fixed-width UTF-16 encoding in which every character requires 2
bytes of storage.  This means that Asian text will require 50% more
space in a VARCHAR2 column than it would in an NVARCHAR2 column (3
bytes per character vs. 2), which can be a hefty penalty.  Searching a
fixed-width string also tends to be more efficient than searching a
variable-width string, which may argue for the occasional NVARCHAR2
column.
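
If you want to see the byte counts for yourself, here's a minimal
sketch (CHARSET_DEMO is a hypothetical table; the expected byte counts
assume an AL32UTF8 database character set and an AL16UTF16 national
character set):

  CREATE TABLE charset_demo (
      v   VARCHAR2(10),
      nv  NVARCHAR2(10)
  );

  -- U+65E5 (a Japanese character): 3 bytes in UTF-8, 2 in UTF-16
  INSERT INTO charset_demo VALUES (UNISTR('\65E5'), UNISTR('\65E5'));

  SELECT LENGTHB(v)  AS varchar2_bytes,    -- expect 3
         LENGTHB(nv) AS nvarchar2_bytes    -- expect 2
    FROM charset_demo;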

Justin Cave  <jcave@xxxxxxxxxxx>
Distributed Database Consulting, Inc.
http://www.ddbcinc.com

-----Original Message-----
From: oracle-l-bounce@xxxxxxxxxxxxx
[mailto:oracle-l-bounce@xxxxxxxxxxxxx] On Behalf Of Tim Gorman
Sent: Thursday, November 10, 2005 6:45 PM
To: oracle-l@xxxxxxxxxxxxx
Cc: mary.crystal@xxxxxxxxxxxx
Subject: Characterset question?

Is there any need or advantage (or danger or disadvantage) to making
NLS_CHARACTERSET the same as NLS_NCHAR_CHARACTERSET when dealing with
multi-byte charactersets or unicode charactersets?