Re: Data Migration options - oracle to oracle

  • From: Gints Plivna <gints.plivna@xxxxxxxxx>
  • To: keyantech@xxxxxxxxx
  • Date: Fri, 6 Sep 2013 11:30:00 +0300

2013/9/6 Karth Panchan <keyantech@xxxxxxxxx>
> 50% of the table data going to the target database needs transformation -
> e.g. 1 source table inserted into multiple target tables, or multiple
> source tables consolidated into 1 target table.
>

So I'd say actually you need a separate project, let's call it "customer
data migration", and you need custom code for that. Data Pump/database
link/old imp-exp/GoldenGate/whatever other data movement mechanism is only
a small part of the problem. If you have hundreds of tables and serious
business logic, I cannot imagine how an automated tool could do automated
data movement from one table to many and consolidate from many tables to
one. That sounds like artificial intelligence, which is not very developed,
at least for now :)
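
Just to illustrate the mechanics (all table and column names below are
invented for the example): the plumbing of a one-to-many split or a
many-to-one consolidation is easy enough in set-based SQL, it is the
mapping rule behind every column that is the custom code:

    -- One source table split into two target tables with a
    -- multi-table insert (hypothetical names):
    INSERT ALL
      INTO customers     (cust_id, cust_name)    VALUES (id, name)
      INTO customer_addr (cust_id, street, city) VALUES (id, street, city)
    SELECT id, name, street, city
    FROM   legacy_customer;

    -- And the other direction, several source tables consolidated
    -- into one target table via a join:
    INSERT INTO target_orders (order_id, cust_name, order_total)
    SELECT o.id, c.name, o.amount
    FROM   legacy_order o
    JOIN   legacy_customer c ON c.id = o.cust_id;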



> Most of the target tables have triggers enabled that need to capture the
> migrated data for audit.
>

Actually, you don't. If the functionality of the audit is known, you can
easily simulate it using some procs/scripts. If the trigger simply copies
each row to another audit table, you don't need to do that on a row-by-row
basis. You can write a simple insert that AFTER the current business data
load copies ALL the new data to the audit tables in one go. It would be
muuuch faster. Of course, you have to disable the application while you are
loading business data.
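
As a sketch (assuming the load stamps every migrated row with a batch id
column - my invention, not something from your schema), the whole audit
copy then becomes one statement per table instead of one trigger firing
per row:

    -- Row-level audit triggers are disabled for the load window...
    ALTER TABLE customers DISABLE ALL TRIGGERS;

    -- ...the business data is bulk loaded here...

    -- ...and afterwards the freshly loaded batch is copied to the
    -- audit table in a single set-based insert:
    INSERT INTO customers_audit (cust_id, cust_name, audit_action, audit_ts)
    SELECT cust_id, cust_name, 'MIGRATION', SYSTIMESTAMP
    FROM   customers
    WHERE  migration_batch_id = :batch_id;

    ALTER TABLE customers ENABLE ALL TRIGGERS;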

>
> Given that we need to migrate every month after we go live, there will be
> existing customers who will be using these application tables. So we will
> not be able to disable constraints/indexes/triggers on the target database.
>

I'd say there are more questions:
1) What happens after a customer is migrated - will only the new data
source be used? If not, how will you migrate changes made in the source?
2) Do you have some common info, for example, classifiers? What if someone
changes them in the source AFTER you have started the overall migration
process? I.e. before each migration stage you most probably need to
synchronize them.
3) Adding new data is wayyyy easier in an empty schema than in a schema
that already contains data. If it already contains data, you have many
problems, like:
- identifiers generated from sequences overlap (see the sketch after this
list)
- natural identifiers for the same objects are different, e.g. "system a"
has Jhon Smith with SSN 545454564, but "system b" has John Smyth with SSN
545454564. Are they the same?
- identifiers are the same, but the meaning is different, e.g. London, but
one in KY, the other in OH, and no one in the UK :)
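
For the overlapping sequences specifically, here is a minimal sketch
(invented names again) of the usual workaround: give every migrated row a
fresh id from the target sequence and keep an old-to-new mapping table so
that child rows can be translated too:

    -- Remember the old->new id pairs while drawing new ids from the
    -- target sequence (all object names are hypothetical):
    INSERT INTO cust_id_map (old_id, new_id)
    SELECT id, target_cust_seq.NEXTVAL
    FROM   legacy_customer;

    -- Parents get the new ids...
    INSERT INTO customers (cust_id, cust_name)
    SELECT m.new_id, l.name
    FROM   legacy_customer l
    JOIN   cust_id_map m ON m.old_id = l.id;

    -- ...and child rows translate their foreign keys through the map:
    INSERT INTO orders (order_id, cust_id, order_total)
    SELECT target_order_seq.NEXTVAL, m.new_id, o.amount
    FROM   legacy_order o
    JOIN   cust_id_map m ON m.old_id = o.cust_id;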

>
> If we had all tables partitioned by customer, this would have been
> easier: load a staging table and exchange partitions. Unfortunately, I
> was told that option is a maintenance nightmare.
>

This anyway doesn't sound usable considering "50% of the table data going
to the target database needs transformation [..]".

> Bottom line, we need to migrate only the data from the source to the
> target db without impacting the production system, with the performance
> to migrate 100 customers' data (450 tables for each customer) in 24 hours.
>

Umm, people and management always want something. The question is - are
their requirements real? And even if they are real, how much do they cost?
It could be that your management cannot imagine what kind of problems
migration projects actually have. So either you or some other person has
to collect all of them and provide info on why these are serious problems.
Even better (for most management types), for each problem provide at least
one solution, telling them how it could be solved. If you have more than
one solution, you can compare them.

Some years ago I wrote a paper about all of that, here:
http://www.gplivna.eu/papers/legacy_app_migration.htm
Since then I have participated in a couple of other projects involving
data migration, and we used more or less the same approach.

Gints


--
//www.freelists.org/webpage/oracle-l

