Jack, you are right; it's probably going to take much more than that. I assume
it is because of the NULL check at the end, which invalidates the index. I am
thinking something like this might work (note that INNER is a reserved word in
Oracle, so the inline view needs a different alias, e.g. "src"):

merge into mytable j
using (select i.mycolumn, i.joincolumn
         from mytable i, mytable k
        where i.anothercol = 'someval'
          and i.joincolumn = k.joincolumn
          and k.mycolumn is null) src
on (src.joincolumn = j.joincolumn)
when matched then
  update set j.mycolumn = src.mycolumn
   where j.mycolumn is null;

On Tue, Feb 25, 2014 at 11:02 PM, Jack van Zanen <jack@xxxxxxxxxxxx> wrote:

> Hi
>
> It is doing 225K index range scans and table lookups; even at 1/10th of a
> second per lookup, that will take around 6 hours to complete, if my maths
> are correct.
>
> Jack
>
>
> Jack van Zanen
>
> -------------------------
> This e-mail and any attachments may contain confidential material for the
> sole use of the intended recipient. If you are not the intended recipient,
> please be aware that any disclosure, copying, distribution or use of this
> e-mail or any attachment is prohibited. If you have received this e-mail
> in error, please contact the sender and delete all copies.
> Thank you for your cooperation
>
>
> On Wed, Feb 26, 2014 at 3:00 PM, Praveen Ramanan <pramanan@xxxxxxxxx> wrote:
>
>> Hello All,
>>
>> I am using Oracle 11gR2 on Windows Server 2008. This is a test system;
>> stats are current, and archivelog is turned off temporarily.
>>
>> I am trying to update a table that has over 1.1 million rows. I have a
>> simple update statement that, where mycolumn is null, sets it to a value
>> looked up from the same table:
>>
>> update mytable a
>>    set a.mycolumn = (select b.mycolumn
>>                        from mytable b
>>                       where a.joincolumn = b.joincolumn
>>                         and b.anothercol = 'someval')
>>  where a.mycolumn is null;
>>
>> The number of rows where mycolumn is null is around 1/5th of the total
>> (225,000 rows).
>>
>> The above update statement seems to be running forever. On checking the
>> explain plan, this is what I see:
>>
>> ---------------------------------------------------------------------------------------------------
>> | Id  | Operation                    | Name                  | Rows  | Bytes | Cost (%CPU)| Time      |
>> ---------------------------------------------------------------------------------------------------
>> |   0 | UPDATE STATEMENT             |                       |   224K|  2630K| 6454M  (1)| 127:19:43 |
>> |   1 |  UPDATE                      | mytable               |       |       |           |           |
>> |*  2 |   TABLE ACCESS FULL          | mytable               |   224K|  2630K|  6644  (1)| 00:00:01  |
>> |*  3 |   TABLE ACCESS BY INDEX ROWID| mytable               |     1 |    21 | 28752  (1)| 00:00:03  |
>> |*  4 |    INDEX RANGE SCAN          | mytable_someval_INDEX |   874K|       |  3318  (1)| 00:00:01  |
>> ---------------------------------------------------------------------------------------------------
>>
>> Predicate Information (identified by operation id):
>> ---------------------------------------------------
>>
>>    2 - filter("a"."mycolumn" IS NULL)
>>    3 - filter("b"."joincolumn"=:B1)
>>    4 - access("b"."anothercol"='someval')
>>
>> I know I can use CTAS to achieve my task; however, I am not sure why
>> updating 225,000 rows should take hours.
>>
>> Can my update statement be written more efficiently? (Perhaps using a
>> merge?)
>>
>> Maybe there is some other bottleneck that I need to look at?
>>
>> Any thoughts would be very well appreciated. Thanks!
>>
>> --
>> *Thousands of candles can be lighted from a single candle, and the life
>> of the single candle will not be shortened. Happiness never decreases by
>> being shared.*
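
[Editor's note] The CTAS route mentioned in the original post can be sketched
as below. This is a minimal sketch, not the poster's actual statement: it
assumes mytable has just the three columns discussed in the thread
(joincolumn, mycolumn, anothercol), and that joincolumn is unique among the
anothercol = 'someval' rows; if it is not, the outer join would multiply rows
and the result would need a deduplication step.

```sql
-- Build a corrected copy of the table in one pass (two full scans hash-joined),
-- then swap it in. Indexes, constraints, grants, and triggers on mytable
-- would have to be recreated on the new table afterwards.
create table mytable_new as
select a.joincolumn,
       -- keep the existing value; fall back to the looked-up one when null
       nvl(a.mycolumn, b.mycolumn) as mycolumn,
       a.anothercol
  from mytable a
  left join mytable b
    on  b.joincolumn = a.joincolumn
    and b.anothercol = 'someval'
    -- only join for the rows that actually need filling in
    and a.mycolumn is null;
```

On a test system with archivelog off, adding NOLOGGING to the CREATE would
reduce redo further; the swap itself is a DROP (or RENAME) of the old table
followed by a RENAME of the new one.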