Re: Migrating 9i to 10g performance issues

  • From: "Howard Latham" <howard.latham@xxxxxxxxx>
  • To: sbecker6925@xxxxxxxxx
  • Date: Tue, 25 Mar 2008 12:19:55 +0000

For our 700 GB database I used transportable tablespaces / Data Pump to go from
Windows NT to Red Hat.
It worked well once I got the strategy right.
In my experience, get a spare box for a few practice runs first.
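
Roughly, the flow looked like the sketch below. The tablespace, directory, and
file names are placeholders rather than our actual scripts, and it assumes both
platforms are the same endian format (as Windows and Linux x86 are). A
cross-endian move additionally needs RMAN CONVERT and 10g on both ends, so check
how that squares with a 9i source; on 9i the metadata export would be the classic
exp with TRANSPORT_TABLESPACE=y instead of expdp.

    -- Source: make the tablespace(s) in the transportable set read only
    ALTER TABLESPACE app_data READ ONLY;

    -- Source: export only the metadata for the transportable set
    expdp system DIRECTORY=dp_dir DUMPFILE=tts_app_data.dmp TRANSPORT_TABLESPACES=app_data TRANSPORT_FULL_CHECK=y

    -- Copy the datafiles and the dump file to the target host, then plug them in
    impdp system DIRECTORY=dp_dir DUMPFILE=tts_app_data.dmp TRANSPORT_DATAFILES='/u02/oradata/TRG/app_data01.dbf'

    -- Target: put the tablespace back into read write
    ALTER TABLESPACE app_data READ WRITE;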



On 25/03/2008, Sandra Becker <sbecker6925@xxxxxxxxx> wrote:
>
> Source DB:  Dell server, RHEL4, 12 GB RAM, Oracle 9.2.0.8 SE1 64-bit, little
> endian
> Target DB:  IBM VM on a series z9, SLES10, 4 GB RAM, Oracle 10.2.0.3 EE, big
> endian
> The database is just shy of 1 terabyte in size--70 percent of the data is in
> a single table; 212 tables in total.
> There are 12 very large tables, including the monster--most of them have no
> column, such as a static date, that can be used to logically break the data
> into "partitions".
>
> I'm working on procedures to move our production database from the Dell to
> the IBM.  My tests so far indicate that I'm going to need more than 24
> hours.  Management, in their infinite wisdom, is insisting that it be done
> in less than 8.  It will take as long as it takes, but I'm wondering what I
> can do to speed things up.  So far I've done the following:
>
> 1)  exp/imp - too slow overall
> 2)  PL/SQL that commits at regular intervals, with the interval depending on
> the size of the table - works very well for all tables under 1M; I can load
> 111 tables in under 2 hours using 2 concurrent sessions.  It works for larger
> tables too, but obviously takes much longer.  I had 2 sessions doing tables
> under 1M and 2 doing tables between 1M and 100M concurrently.  I didn't try it
> for the 12 tables over 100M.  (A sketch of this approach follows the list.)
> 3)  Direct-path insert - used on the table holding 70 percent of the data.
> Four months ago I insisted this table have a static date column added, so I
> can logically break the data loads down by date--they want the most current
> data loaded first, and the remainder can be done over a period of days.  This
> is working reasonably well, but having done this same thing once before on
> this table, I know it will take about a month to get all the data moved under
> the constraints I'm working with--it can't be done during core business hours,
> etc.  (A sketch follows the list.)
> 4)  I put the target database in NOARCHIVELOG mode for my testing.  Is this a
> wise move for migrating production during the go-live?
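>
> For reference, the commit-at-intervals load in (2) is essentially of the
> following shape.  The table name, database link, and batch size are
> placeholders, not the real code:
>
>     DECLARE
>       CURSOR src_cur IS SELECT * FROM big_table@prod_link;
>       TYPE row_tab IS TABLE OF big_table%ROWTYPE;
>       l_rows row_tab;
>     BEGIN
>       OPEN src_cur;
>       LOOP
>         FETCH src_cur BULK COLLECT INTO l_rows LIMIT 10000;
>         EXIT WHEN l_rows.COUNT = 0;
>         FORALL i IN 1 .. l_rows.COUNT
>           INSERT INTO big_table VALUES l_rows(i);
>         COMMIT;   -- commit each batch to keep undo manageable
>       END LOOP;
>       CLOSE src_cur;
>     END;
>     /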
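>
> And the date-bounded direct-path load in (3) looks roughly like this (again,
> the table, link, and date range are placeholders):
>
>     -- APPEND does a direct-path insert; with the target in NOARCHIVELOG
>     -- mode it also keeps redo to a minimum
>     INSERT /*+ APPEND */ INTO monster_table
>       SELECT * FROM monster_table@prod_link
>       WHERE  static_date >= TO_DATE('01-MAR-2008', 'DD-MON-YYYY')
>       AND    static_date <  TO_DATE('01-APR-2008', 'DD-MON-YYYY');
>     COMMIT;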
>
> Management has suggested that I leave off old data and load it later.  That
> doesn't work with 95 percent of the tables because of their structure and foreign key
> constraints.  They also suggested I use both the primary and the standby
> databases to read from.  No way to test this until I go live--constraints
> again--although this actually was part of my plan from the beginning.  Will
> too many concurrent sessions loading data slow things down too much?  What
> would I look at to determine this?  10g is new to me so I'm not familiar
> with all the features yet and may be missing something significant.
>
> Any suggestions are appreciated, other than telling management what they
> can do with their time constraint.  (Already did that.)
>
> Thanks.
>
> Sandy
>



-- 
Howard A. Latham
