Don't commit in a loop - ORA-01555 will be coming your way. Export/import
can't deal with different table names either. I'd do it in plain old SQL:
drop any indexes on the new table, then

  insert /*+ APPEND */ into schema2.new_table
  select ... from schema1.old_table
  where <your where clause - select 10 days' worth of data at a time, for
  example; the more the better>;

Once all the data has been loaded, build the indexes in parallel.

On 10/19/05, Knight, Jon <jknight@xxxxxxxxxxxxxx> wrote:
>
> I have a couple of large tables (average 350,000 recs/day) that I need to
> move to a different schema. We have almost 1 year of data. They are
> currently traditional table structures and I'm moving them to partitioned
> tables. The table names are different between schemas.
>
> We have already created the partitioned tables. I'm wondering if
> export/import will do the trick. Will the records go into the correct
> partitions? What about the different table names? I had planned to use
> pl/sql, but thought I'd ask for other suggestions first. If you don't have
> an alternative suggestion, could you please comment on whether there is a
> more efficient (quicker) pl/sql approach than what I have below.
> Particularly, I'm interested in avoiding rollback errors.
>
> We're still on Oracle 8i, Sun Solaris.
>
> Many thanks,
> Jon Knight
>
> declare
>   cursor c1 is
>     select ...
>       from schema1.old_table_name;
>   v1 c1%rowtype;
> begin
>   open c1;
>   loop
>     fetch c1 into v1;
>     exit when c1%notfound;
>
>     insert into schema2.new_table_name (...)
>     values (...);
>     commit;
>   end loop;
>   close c1;
> end;
> /
>
>
> --
> //www.freelists.org/webpage/oracle-l
>
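
To spell that out, here's a rough sketch of the whole move. All table,
column, and index names are placeholders, and the date slicing assumes a
date column is the partition key - adjust to your own schema. Since the
target is a partitioned table, a plain insert-select routes each row to the
correct partition automatically; no per-partition work needed.

```sql
-- One slice at a time; /*+ APPEND */ does a direct-path load above the
-- high-water mark, avoiding the buffer cache and undo for the new blocks.
insert /*+ APPEND */ into schema2.new_table (col1, col2, trans_date)
select col1, col2, trans_date
  from schema1.old_table
 where trans_date >= to_date('01-JAN-2005', 'DD-MON-YYYY')
   and trans_date <  to_date('11-JAN-2005', 'DD-MON-YYYY');

-- One commit per slice, not per row. (A direct-path insert must be
-- committed before the session can read or load the table again.)
commit;

-- ... repeat for the remaining 10-day slices ...

-- After the last slice, build the indexes in parallel.
-- LOCAL assumes an equi-partitioned index; degree 4 is just an example.
create index schema2.new_table_ix1
    on schema2.new_table (trans_date)
 local nologging parallel 4;
```

If the table itself is set NOLOGGING, the direct-path loads generate
minimal redo as well - just remember to take a backup afterwards, since
those blocks can't be recovered from the archived logs.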