RE: Update of Clobs *Performance*

  • From: "Mark W. Farnham" <mwf@xxxxxxxx>
  • To: "Anthony Molinaro" <amolinaro@xxxxxxxx>, <oracle-l@xxxxxxxxxxxxx>
  • Date: Wed, 3 Nov 2004 15:16:44 -0500

Putting in a counter, picking a reasonable batch size, and committing and
resetting the counter when you hit the limit is usually useful.
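A minimal sketch of that pattern, assuming a plain cursor FOR loop (the table
and column names and the batch size here are only placeholders):

declare
  v_done   pls_integer := 0;
  c_batch  constant pls_integer := 10000;   -- the "reasonable size"; tune to taste
begin
  for r in (select id, payload from source_tab) loop
    update target_tab
       set big_col = r.payload
     where id = r.id;

    v_done := v_done + 1;
    if v_done = c_batch then
      commit;        -- commit when the counter hits the limit
      v_done := 0;   -- reset the counter and keep going
    end if;
  end loop;
  commit;            -- pick up the final partial batch
end;
/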

Even if a single commit for the whole table will work for him now, there is
a good chance it will blow up later as the tables grow. It's pretty likely
that committing each row is unreasonable, but committing monoliths is a recipe
for future problems and for driving UNDO out of the cache without need. I
recommend avoiding monolithic commits unless there is a hard requirement for
reversibility (rollback), and avoiding a program architecture that drives a
need for monolithic commits is up there with the golden mean as far as I'm
concerned.

mwf

-----Original Message-----
From: Anthony Molinaro [mailto:amolinaro@xxxxxxxx]
Sent: Wednesday, November 03, 2004 3:05 PM
To: mwf@xxxxxxxx; oracle-l@xxxxxxxxxxxxx
Subject: RE: Update of Clobs *Performance*


In regard to: >>>>>>>> Even better, just commit once at the end...

-----Original Message-----
From: Mark W. Farnham [mailto:mwf@xxxxxxxx]
Sent: Wednesday, November 03, 2004 3:00 PM
To: oracle-l@xxxxxxxxxxxxx
Subject: RE: Update of Clobs *Performance*


create index why_full_scan_all_my_clobs_for_each_one_row_update on
Table_B(tabB_num)

change your where clause to where tabB_num = to_number(v_id)

Think about a commit counter within the loop, committing in batches smaller
than the entire table. Maybe 1000 or 10000?
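Putting those three suggestions together against the procedure quoted below,
the loop might end up looking something like this (the index name and batch
size are just illustrative):

create index tabB_num_idx on Table_B(tabB_num);

declare
  v_clob   varchar2(32500);
  v_id     varchar2(10);
  v_count  pls_integer := 0;

  cursor cont_rep_clob is
    select tabA_char, tabA_clob
      from Table_A;
begin
  open cont_rep_clob;
  loop
    fetch cont_rep_clob into v_id, v_clob;
    exit when cont_rep_clob%NOTFOUND;

    update Table_B
       set tabB_clob = v_clob
     where tabB_num = to_number(v_id);   -- column left bare so the index can be used

    v_count := v_count + 1;
    if v_count >= 1000 then
      commit;        -- commit every 1000 rows
      v_count := 0;
    end if;
  end loop;
  commit;            -- final partial batch
  close cont_rep_clob;
end;
/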

Regards,

mwf


<snip>

Procedure
declare
  v_clob  varchar2(32500);
  v_id    varchar2(10);

  cursor cont_rep_clob is
    select tabA_char, tabA_clob
      from Table_A;

begin
  open cont_rep_clob;
  loop
    fetch cont_rep_clob into v_id, v_clob;

    exit when cont_rep_clob%NOTFOUND;

    update Table_B
       set tabB_clob = v_clob
     where to_char(tabB_num) = v_id;   -- to_char() on the column defeats any index on tabB_num

    commit;   -- commits once per row

  end loop;
  close cont_rep_clob;
end;
/


