Re: 10gR2 RMan Transport Tablespace

  • From: japplewhite@xxxxxxxxxxxxx
  • To: knecht.stefan@xxxxxxxxx
  • Date: Wed, 28 Jan 2009 10:45:42 -0600

Good suggestion, Stefan, except that we have numerous schemas for our 
home-grown Apps in the database.  We can't just overwrite the Dev and Test 
databases with a duplicate of Prod.  It would be almost as much trouble to 
preserve and replace the App schemas (some of which are large in their own 
right) in Dev and Test after a refresh as to just transport the SIS data.

We've gotten a few of the Apps out of the SIS database, but accessing all 
the SIS tables from another DB via DB Links has proven problematic, 
especially since a lot of the SIS tables use LONGs.

Jack C. Applewhite - Database Administrator
Austin I.S.D. - MIS Department
512.414.9715 (wk)  /  512.935.5929 (pager)

Stefan Knecht <knecht.stefan@xxxxxxxxx>
 
Sent by: oracle-l-bounce@xxxxxxxxxxxxx
01/28/2009 10:23 AM
Please respond to
knecht.stefan@xxxxxxxxx


To
japplewhite@xxxxxxxxxxxxx
cc
mfontana@xxxxxxxxxxx, oracle-l@xxxxxxxxxxxxx
Subject
Re: 10gR2 RMan Transport Tablespace

For something like this I would not worry about the metadata / # of 
objects at all, and simply go with the duplicate database feature RMAN 
offers.
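
For reference, a minimal sketch of what that would look like on 10gR2 
(backup-based duplication; the connect strings, the DEV instance name, 
and the assumption that Prod's backups are visible to the Dev host are 
all hypothetical):

    $ rman TARGET sys@PROD AUXILIARY sys@DEV

    RMAN> # auxiliary DEV instance already started NOMOUNT
    RMAN> DUPLICATE TARGET DATABASE TO DEV NOFILENAMECHECK;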

Cheers

Stefan


On Wed, Jan 28, 2009 at 5:10 PM, Michael Fontana <mfontana@xxxxxxxxxxx> 
wrote:
Congratulations on doing your best to overcome what is an almost 
unimaginably bad design.  However, I am sure many on this list have had to 
cut their teeth on one if not many of these.  Some major applications 
(even those owned by Oracle) are also often needlessly complicated and 
make poor use of Oracle tablespaces, each in their own way (think SAP, 
Peoplesoft, Siebel, etc). 
 
The number of tables and indices you have makes this an absolutely 
perfect situation for the use of transportable tablespaces and RMAN.  It 
should be fairly straightforward to script a refresh operation, but keep 
in mind you will have to use one of the export/import methods (classic 
export/import or Data Pump) to migrate the metadata.  Since this is a 
newer operation and your metadata will probably be quite large, it might 
be worth using the newer Data Pump features, including multiple worker 
processes, for the migration over the LAN.  I have used this for quite a 
while with no pitfalls except an occasional network glitch or unplanned 
outage.  Still, using this method should DECREASE the amount of data you 
are migrating and mitigate such issues.
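
As a rough sketch of the Data Pump side of that (the tablespace, 
directory, and file names below are made up, and the tablespace has to 
sit read only while the metadata is exported and the datafiles are 
copied):

    # on Prod:
    SQL> ALTER TABLESPACE sis_data READ ONLY;
    $ expdp system DIRECTORY=dpump_dir DUMPFILE=sis_tts.dmp \
        TRANSPORT_TABLESPACES=sis_data TRANSPORT_FULL_CHECK=y

    # copy the datafiles and sis_tts.dmp to Dev / Test, then on each:
    $ impdp system DIRECTORY=dpump_dir DUMPFILE=sis_tts.dmp \
        TRANSPORT_DATAFILES='/u01/oradata/dev/sis_data01.dbf', ...

    # back on Prod:
    SQL> ALTER TABLESPACE sis_data READ WRITE;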
 
On a final note – are you saying that even the indices are in the same 
tablespace?  While there are not as many reasons to do so as there once 
were, I would consider rebuilding them in your production system and 
placing them in a separate tablespace as a manageability improvement.  It 
would make operations such as this much easier to accomplish.  You might 
even consider putting larger indices in a separate tablespace from 
smaller ones for space efficiency.  This should be fairly easy to 
accomplish with a phased approach over time, but of course would probably 
have to be scheduled during an outage window.  Usually applications such 
as this make no reference to the tablespace where an index is located, 
but I have seen some that routinely drop and/or rebuild indices within 
application processes, so watch for that.
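
A sketch of that phased rebuild, with a hypothetical SIS_INDEXES 
tablespace and datafile path (the ONLINE option needs Enterprise 
Edition, which you are on):

    SQL> CREATE TABLESPACE sis_indexes
           DATAFILE '/u01/oradata/prod/sis_idx01.dbf' SIZE 8G;

    SQL> -- generate one statement per index, then run the output in batches
    SQL> SELECT 'ALTER INDEX ' || owner || '.' || index_name ||
                ' REBUILD TABLESPACE sis_indexes ONLINE;'
         FROM   dba_indexes
         WHERE  tablespace_name = 'SIS_DATA';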
 
 

From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx] 
On Behalf Of japplewhite@xxxxxxxxxxxxx
Sent: Wednesday, January 28, 2009 9:56 AM
To: oracle-l@xxxxxxxxxxxxx
Subject: 10gR2 RMan Transport Tablespace
 

Environment: 10.2.0.4 Enterprise DBs on RHEL 4.0 servers 

We use a 3rd party Student Information System (SIS) that (mis)uses an 
Oracle database to store over 176,000 tables and 275,000 indexes.  Yes, 
it's a horrible design, but we've been stuck with it for years and have 
figured out how to manage nicely in spite of it. 

We refresh our Dev and Test databases weekly with the whole set of tables 
and indexes so our Developers have fresh meat to gnaw.  For years we've 
done this via Export / Drop / Import - multiple jobs in parallel - because 
Prod and Dev / Test were on different OS's.  Now we've finally got them 
all on the same OS and at the same 10gR2 level - they were on 9iR2 up 
through December. 

Exp / Drop / Imp works OK, but takes all day Sunday.  We've been 
anticipating that Transportable Tablespaces would be the ticket for us, 
since the SIS stores all tables and indexes in a single tablespace.  Plus, 
the SIS uses nothing but the plainest of vanilla datatypes and no fancy 
database features that would preclude us from using TT.  I've perused the 
RMan docs on Transport Tablespace and it looks like we could do it. 
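
For the record, the RMan command I've been looking at is roughly this 
(tablespace and directory names are placeholders):

    RMAN> TRANSPORT TABLESPACE sis_data
            TABLESPACE DESTINATION '/u01/tts_out'
            AUXILIARY DESTINATION '/u01/tts_aux';

As I read the docs, it builds the transportable set from our backups via 
an auxiliary instance, so the Prod tablespace never has to go read only, 
and it drops datafile copies, a metadata dump, and an import script into 
the destination directory for us to plug in on Dev and Test. 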

I do wonder if there are pitfalls for us, having over 450,000 segments in 
that single 88GB tablespace (11 x 8GB datafiles).  Anyone have any 
positive or negative experience with something like this? 

BTW, I've also considered using Data Pump Import on the target DBs over DB 
Links back to Prod, but feel that would use such a huge amount of LAN / 
WAN bandwidth - especially since it'd have to be done for both Dev and 
Test - that impdp is not feasible for us.  Any comments about that 
approach? 
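
(For what it's worth, the network-mode import I have in mind would look 
something like this, with a made-up link and schema name:

    $ impdp system NETWORK_LINK=prod_link SCHEMAS=sis_owner \
        TABLE_EXISTS_ACTION=replace PARALLEL=4 NOLOGFILE=y

and, if memory serves, network-mode Data Pump balks at LONG columns 
anyway, which we have plenty of.) 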

Thanks. 

Jack C. Applewhite - Database Administrator
Austin I.S.D. - MIS Department
512.414.9715 (wk)  /  512.935.5929 (pager)

