RE: Interpreting csscan results

  • From: "Mark W. Farnham" <mwf@xxxxxxxx>
  • To: <don@xxxxxxxxx>, "'oracle_l'" <oracle-l@xxxxxxxxxxxxx>
  • Date: Wed, 9 May 2007 12:24:41 -0400

I'm pretty sure you want to fix those things BEFORE using CSALTER.

Let's suppose that there is a lossy character in a column name. That allows
the logical possibility of two different names collapsing into identical
column names after the conversion.

Now, thirty lossy VARCHAR2 values might be pretty easy to clean up "by the
rules" if they are synonym names or view column definitions that you can fix
by dropping the current objects and recreating them with convertible names.

IF NOT (though I'm not sure where they would appear in the dictionary such
that you cannot repair them with supported commands), you might have to
update the values "by hand" via sqlplus. DEFINITELY a test-copy exercise.

As for the application data, unless you're bound by vendor package rules you
should be able to save the rows for audit purposes (and so linguists can
second-guess your repair choices later) and then fix them before using
CSALTER.
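
A minimal sketch of that idea, with a made-up table and column (not anything
from your scan), again assuming the stray bytes are really Latin-1:

-- keep the offending rows, keyed by rowid, before repairing anything
create table cust_name_lossy_audit as
  select rowid as rid, cust_name
    from customers
   where cust_name <> convert(cust_name, 'US7ASCII', 'WE8ISO8859P1');

-- then "vanillanize" the live rows down to clean 7-bit values
update customers
   set cust_name = convert(cust_name, 'US7ASCII', 'WE8ISO8859P1')
 where rowid in (select rid from cust_name_lossy_audit);
commit;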

In my opinion it is far better to use CSALTER only after obtaining a
completely clean report with the original values preserved than it is to
risk anything.

In the event the existing values are legitimate in the new character set but
shouldn't exist in the current one (how the heck could that happen, you ask;
I don't know, but it does), so they are "lossy" only in that there is no
mapping out of the current character set, then saving the rows in some form
where you can selectively slap them back over the top of the "vanillanized"
non-lossy values might in some cases give the correct information result.
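
Continuing the same made-up example: once CSALTER has run and the database is
WE8ISO8859P1, those saved byte sequences are legitimate again, so they can be
put back selectively (this only works if the rowids are still valid, i.e. the
rows haven't been moved or reloaded in between):

update customers c
   set c.cust_name = (select a.cust_name
                        from cust_name_lossy_audit a
                       where a.rid = c.rowid)
 where c.rowid in (select rid from cust_name_lossy_audit);
commit;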

Good luck. And then of course you'll find out you really need to go to some
UTF variant, but that is another adventure.

Regards,

mwf

-----Original Message-----
From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx]
On Behalf Of Don Seiler
Sent: Wednesday, May 09, 2007 11:17 AM
To: oracle_l
Subject: Interpreting csscan results

Morning all.  Our production database (10.2.0.2 on RHEL3) is US7ASCII,
and we've recently had issues with Western European characters being used.
I'm researching the idea of migrating to WE8ISO8859P1.

So last night I ran csscan with those from/to parameters, and I'm more
than a little confused.  My reading led me to believe that since
WE8ISO8859P1 is a complete superset of US7ASCII, there should be no
issues.  Perhaps I'm just interpreting the results wrong.
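
For reference, the command was roughly this (log name and exact options from
memory, so treat it as approximate):

csscan \"sys as sysdba\" FULL=Y FROMCHAR=US7ASCII TOCHAR=WE8ISO8859P1 LOG=wescan CAPTURE=Y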

The scan summary says:
Some character type data in the data dictionary are not convertible to
the new character set
Some character type application data are not convertible to the new
character set

Here is the summary for each data dictionary and app data:

[Data Dictionary Conversion Summary]

Datatype                    Changeless      Convertible       Truncation            Lossy
--------------------- ---------------- ---------------- ---------------- ----------------
VARCHAR2                    17,344,573                0                0               30
CHAR                             1,216                0                0                0
LONG                           916,150                0                0                0
CLOB                         1,505,729                0                0                0
VARRAY                          17,408                0                0                0
--------------------- ---------------- ---------------- ---------------- ----------------
Total                       19,785,076                0                0               30
Total in percentage            100.000%           0.000%           0.000%           0.000%

The data dictionary can not be safely migrated using the CSALTER script

[Application Data Conversion Summary]

Datatype                    Changeless      Convertible       Truncation            Lossy
--------------------- ---------------- ---------------- ---------------- ----------------
VARCHAR2                49,180,912,298                0                0           11,819
CHAR                    10,295,777,604                0                0              736
LONG                             4,957                0                0                0
CLOB                                 1                0                0                0
VARRAY                               0                0                0                0
--------------------- ---------------- ---------------- ---------------- ----------------
Total                   59,476,694,860                0                0           12,555
Total in percentage            100.000%           0.000%           0.000%           0.000%

The scan.err file lists the names containing mangled Western European
characters, each flagged with a "lossy conversion" exception.  I'm
wondering if this (in particular
the data dictionary warning) means my database will be horribly
corrupted if I try CSALTER, or if it just means I will have to correct
those particular fields afterward.

-- 
Don Seiler
oracle blog: http://ora.seiler.us
ultimate: http://www.mufc.us
--
//www.freelists.org/webpage/oracle-l