Environment: 10.2.0.4 Enterprise DBs on RHEL 4.0 servers.

We use a 3rd party Student Information System (SIS) that (mis)uses an Oracle database to store over 176,000 tables and 275,000 indexes. Yes, it's a horrible design, but we've been stuck with it for years and have figured out how to manage nicely in spite of it.

We refresh our Dev and Test databases weekly with the whole set of tables and indexes so our developers have fresh meat to gnaw. For years we've done this via Export / Drop / Import - multiple jobs in parallel - because Prod and Dev/Test were on different OSes. Now we've finally got them all on the same OS and at the same 10gR2 level - they were 9iR2 up through December. Exp / Drop / Imp works OK, but takes all day Sunday.

We've been anticipating that Transportable Tablespaces would be the ticket for us, since the SIS stores all tables and indexes in a single tablespace. Plus, the SIS uses nothing but the plainest of vanilla datatypes and no fancy database features that would preclude us from using TT. I've perused the RMAN docs on Transport Tablespace and it looks like we could do it. I do wonder if there are pitfalls for us, having over 450,000 segments in that single 88GB tablespace (11 x 8GB datafiles). Does anyone have any positive or negative experience with something like this?

BTW, I've also considered using Data Pump Import on the target DBs over DB links back to Prod, but feel that would use such a huge amount of LAN/WAN bandwidth - especially since it'd have to be done for both Dev and Test - that impdp is not feasible for us. Any comments about that approach?

Thanks.

Jack C. Applewhite - Database Administrator
Austin I.S.D. - MIS Department
512.414.9715 (wk) / 512.935.5929 (pager)
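For reference, the Data Pump flavor of the transportable-tablespace refresh we're contemplating would look roughly like this. This is only a sketch; the tablespace name (SISDATA), directory object (dp_dir), hostnames, and file paths are placeholders, not our actual setup, and the old Dev/Test copy of the tablespace would have to be dropped first on the target.

```shell
#!/bin/sh
# Sketch of a 10gR2 transportable tablespace refresh (Prod -> Dev).
# Names and paths below are placeholders.

# 1. Verify the tablespace set is self-contained (run on Prod).
sqlplus -s "/ as sysdba" <<'EOF'
EXEC DBMS_TTS.TRANSPORT_SET_CHECK('SISDATA', TRUE);
SELECT * FROM transport_set_violations;   -- should return no rows
EOF

# 2. Make the tablespace read-only, then export its metadata only
#    (no row data moves through the dump file).
sqlplus -s "/ as sysdba" <<'EOF'
ALTER TABLESPACE sisdata READ ONLY;
EOF
expdp system DIRECTORY=dp_dir DUMPFILE=sisdata_tts.dmp \
      TRANSPORT_TABLESPACES=sisdata

# 3. Copy the datafiles and the dump file to the target server
#    (scp shown; any binary-safe copy works, OS/endianness match here).
scp /u01/oradata/PROD/sisdata*.dbf devhost:/u01/oradata/DEV/
scp /u01/dpdump/sisdata_tts.dmp    devhost:/u01/dpdump/

# 4. On the target, plug the tablespace in; TRANSPORT_DATAFILES takes
#    a comma-separated list naming all 11 copied datafiles.
impdp system DIRECTORY=dp_dir DUMPFILE=sisdata_tts.dmp \
      TRANSPORT_DATAFILES='/u01/oradata/DEV/sisdata01.dbf'
      # ...list the remaining datafiles the same way

# 5. Return the Prod tablespace to read/write.
sqlplus -s "/ as sysdba" <<'EOF'
ALTER TABLESPACE sisdata READ WRITE;
EOF
```

With 450,000+ segments, I'd expect the metadata export/import in steps 2 and 4 to dominate the elapsed time rather than the 88GB file copy. The 10gR2 RMAN TRANSPORT TABLESPACE command the docs describe builds the same transport set from backups via an auxiliary instance, which (if I read it right) would avoid the read-only window on Prod entirely.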