There could be a couple of answers, depending on how your table is
organized. If you have an index on SSN, then one thing you might try is
a simple loop like this:

begin
  loop
    update sales set ssn = mod(ssn, 10000)
     where ssn > 9999
       and rownum <= 1000;
    commit;
    exit when sql%rowcount < 1000;
  end loop;
end;

though that might be slow in certain cases (if your clustering factor
for the SSN index isn't good, for example). You might also give bulk
collect/update a shot and see how something like this performs for you:

declare
  cursor l_cur is select rowid from sales where ssn > 9999;
  type l_tab is table of rowid index by binary_integer;
  l_buffer l_tab;
begin
  open l_cur;
  loop
    fetch l_cur bulk collect into l_buffer limit 1000;
    forall i in 1 .. l_buffer.count
      update sales set ssn = mod(ssn, 10000)
       where rowid = l_buffer(i);
    commit;
    exit when l_buffer.count < 1000;
  end loop;
  close l_cur;
end;

That should be faster than the row-by-row approach in your example.
(Note that mod(ssn, 10000) gives the same result as your
to_number(substr(to_char(ssn), -4)) for the rows where ssn > 9999.)
Another thing you could consider is doing a CTAS and then swapping the
tables.

On Fri, Nov 13, 2009 at 2:50 PM, Roger Xu <wellmetus@xxxxxxxxx> wrote:
> Hi List,
>
> Background: 9i (9.2.0.8.0) HP-UX B.11.31 U ia64
>
> We need to shrink SSN to the last 4 digits.
>
> update sales set ssn=to_number(substr(to_char(ssn),-4));
>
> But the sales table is quite large and we want to commit every 1000 records.
>
> Does anyone have a better idea to do this rather than below?
>
> Thanks,
>
> Roger
>
> declare
>   cursor c12 is select ssn from sales for update of ssn;
>   n number := 0;
> begin
>   for eachc12 in c12 loop
>     if eachc12.ssn is not null then
>       n := n + 1;
>       update sales set ssn=to_number(substr(to_char(ssn),-4))
>        where current of c12;
>       if ( n = 1000 )
>       then
>         commit;
>       end if;
>     end if;
>   end loop;
>   commit;
> end;

--
Alex Fatkulin,
http://afatkulin.blogspot.com
http://www.linkedin.com/in/alexfatkulin
--
//www.freelists.org/webpage/oracle-l
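A minimal sketch of the CTAS-and-swap idea mentioned above, assuming a
maintenance window with no concurrent DML on sales (the column list,
nologging/parallel options, and the need to rebuild dependent objects
are assumptions, not something verified against this schema):

```sql
-- Build the replacement table in one pass, transforming SSN on the way.
-- List every column of sales explicitly; only ssn is shown transformed here.
create table sales_new nologging as
  select mod(s.ssn, 10000) as ssn
         -- , s.other_col1, s.other_col2, ...  (all remaining columns)
    from sales s;

-- Swap the tables. Indexes, constraints, triggers and grants do NOT
-- follow the rename and must be re-created on the new sales table.
rename sales to sales_old;
rename sales_new to sales;

-- Once verified, drop the old copy.
-- drop table sales_old;
```

With nologging the new table's blocks are not fully redo-protected, so
take a backup afterwards; the upside is a single full-scan pass instead
of many batched updates and commits.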