Since you mention LONG columns and you seem to be on 9i/8i, I think the fastest method would be exp with direct=y, then import into the partitioned table with commit=y and buffer=5mb (check that value for your environment). It may not be possible to avoid downtime entirely. You have not mentioned the table size.

If it is possible for you to know which rows got inserted/modified/deleted during the day, I would recommend this: create a partitioned table, insert all rows up to yesterday into it, then during a minimal maintenance window insert today's data (and apply any updates/deletions) to the new partitioned table and rename the tables to swap them. The indexes can be pre-created on the new table. Child tables, if any, would need to be re-pointed to the new (parent) partitioned table. That way you will have minimal downtime. For rows with LONG columns, you might need to delete and re-insert the full row rather than update in place.

HTH
GovindanK

On Mon, 27 Aug 2007 08:44:39 -0500, "Suresh Chaganti" <chaganti.suresh@gmail.com> said:

Hello,

Can anyone point me to a doc on partitioning a non-partitioned table? I couldn't find one on Metalink. I know we can use the DBMS_REDEFINITION package starting in 9i, but it has several restrictions, including the inability to use materialized views, LONG columns, etc. Also, I have some 8i databases as well. Even if I need to take the table offline, that is OK, but I am looking for definitive guidance on the methodology to ensure all the indexes and constraints will be intact after the re-org.

Thank you
Suresh
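
For reference, the exp/imp route sketched above might look like the following. The schema, table, and file names are hypothetical, and the exact parameters should be verified in your environment; note that ignore=y is what lets imp load rows into the pre-created partitioned table instead of failing because the table already exists:

```
# Export the source table with direct path (sketch only; names are hypothetical)
exp scott/tiger tables=BIG_TABLE direct=y file=big_table.dmp log=exp_big_table.log

# Import into the pre-created partitioned BIG_TABLE in the target schema.
# commit=y commits per buffer-full (5 MB here) instead of per table,
# keeping rollback/undo usage small for a large load.
imp scott/tiger tables=BIG_TABLE file=big_table.dmp ignore=y commit=y buffer=5242880 log=imp_big_table.log
```

With commit=y, a failed import can leave the target partially loaded, so plan to truncate and re-run the imp rather than resume it.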
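
The create/load/rename approach described above can be sketched as follows. All names, the partition key, and the ranges are hypothetical examples, not taken from the original thread:

```sql
-- Sketch only: table, column, and partition names are hypothetical.
-- 1. Create the new partitioned table with the same structure as ORDERS.
CREATE TABLE orders_part (
  order_id    NUMBER,
  order_date  DATE,
  order_text  LONG
)
PARTITION BY RANGE (order_date) (
  PARTITION p2006 VALUES LESS THAN (TO_DATE('01-01-2007','DD-MM-YYYY')),
  PARTITION p2007 VALUES LESS THAN (TO_DATE('01-01-2008','DD-MM-YYYY')),
  PARTITION pmax  VALUES LESS THAN (MAXVALUE)
);

-- 2. Load all rows up to yesterday. Note that INSERT ... SELECT cannot
--    copy a LONG column, so rows with LONGs must come over via exp/imp
--    or a PL/SQL loop.

-- 3. Pre-create indexes (and constraints) on the new table.
CREATE INDEX orders_part_ix1 ON orders_part (order_id) LOCAL;

-- 4. During the maintenance window: apply today's inserts/updates/deletes
--    to ORDERS_PART, then swap the names.
ALTER TABLE orders RENAME TO orders_old;
ALTER TABLE orders_part RENAME TO orders;
```

After the rename, any foreign keys, grants, synonyms, and child tables that reference the old table need to be re-created or re-pointed at the new name, since RENAME does not carry those over from the old table.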