Re: Massive Update commit every 1000 records

  • From: Yechiel Adar <adar666@xxxxxxxxxxxx>
  • Date: Tue, 22 Dec 2009 14:03:40 +0200

I thought I remembered a problem with %notfound.
See Steven Feuerstein's article at http://www.oracle.com/technology/oramag/oracle/08-mar/o28plsql.html. You might escape the problem because you check %notfound AFTER processing the rows.
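
Roughly, the issue is this: with fetch ... bulk collect ... limit, %notfound becomes TRUE on the fetch that returns the final, partial batch, so an exit placed immediately after the fetch silently drops those rows. A minimal sketch, reusing the sales/ssn example from below:

  declare
    cursor c is select rowid rid from sales where ssn is not null;
    type t_rids is table of c%rowtype;
    v_arr t_rids;
  begin
    open c;
    loop
      fetch c bulk collect into v_arr limit 1000;
      -- exit when c%notfound;    -- wrong here: a last batch of fewer than 1000 rows is never processed
      exit when v_arr.count = 0;  -- safe: exit only when the fetch returned nothing
      -- ... process v_arr here ...
    end loop;
    close c;
  end;

Checking %notfound only at the bottom of the loop, after the rows have been processed, works for the same reason.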

Adar Yechiel
Rechovot, Israel



Roger Xu wrote:
First of all, thanks for everyone's replies, and sorry for my mistake of not resetting the counter back to zero after the commit. We do have an index on SSN, but I want to drop it to speed up the update. It looks to me that a better way is either CTAS or a bulk collect (below). Considering the standby database in place, I think I will go with bulk collect to reduce the redo transport. Thanks, Roger Xu
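
For reference, the CTAS route would look roughly like this (sale_id stands in for the real column list, and indexes, constraints and grants would still have to be carried over to the new table):

create table sales_new nologging as
select sale_id,                                  -- placeholder for the remaining columns
       to_number(substr(to_char(ssn), -4)) ssn   -- keep only the last four digits
  from sales;
-- nologging is ignored when the database runs in force logging mode (usual with a standby);
-- afterwards the old table would be dropped and sales_new renamed to sales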

On Fri, Nov 13, 2009 at 2:17 PM, Alisher Yuldashev <yuldashev@xxxxxxxxxxx> wrote:

    Roger,

    I would do

    declare
      cursor c12 is select rowid rid from sales where ssn is not null;
      -- a scalar collection of rowids avoids the FORALL restriction on
      -- referencing record fields (PLS-00436)
      type v_typ is table of rowid index by pls_integer;
      v_arr v_typ;
    begin
      open c12;
      loop
        fetch c12 bulk collect into v_arr limit 1000;
        forall i in 1..v_arr.count
          update sales
             set ssn = to_number(substr(to_char(ssn), -4))
           where rowid = v_arr(i);  -- bind the collection element, not the cursor
        commit;                     -- commit each batch of up to 1000 rows
        exit when c12%notfound;     -- checked only after the batch is processed
      end loop;
      close c12;
    end;

    Thanks,

    --
    Alisher Yuldashev
    Senior Oracle DBA
    The Pythian Group - Ottawa, Canada
    Web : http://www.pythian.com

