Re: Deletion from large table

  • From: Tim Gorman <tim@xxxxxxxxx>
  • To: dbakevlar@xxxxxxxxx, JDunn@xxxxxxxxx
  • Date: Tue, 23 Aug 2016 09:16:38 -0600

John,

Beaten to the punch by my brilliant spouse...

Augmenting her insightful suggestion: can you partition the table in such a way that the rows to be deleted are isolated into separate partitions, away from the rows being inserted/updated? Queries aren't a concern, but non-isolated DML would be a problem.

An example would be partitioning on a DATE column, assuming that the rows being deleted are older than a certain point in time, while the rows being updated are newer than that point.
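For instance, here is a minimal sketch of such a layout, using hypothetical names (big_table, created_dt) and daily interval partitioning; the real partitioning column, interval, and starting boundary would of course be whatever fits your data:

    create table big_table (
        id          number         not null,
        created_dt  date           not null,
        payload     varchar2(4000)
    )
    partition by range (created_dt)
    interval (numtodsinterval(1, 'DAY'))    -- one partition per day
    ( partition p_initial values less than (date '2016-01-01') );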

If so, then you can employ a parallel CREATE TABLE ... AS SELECT or INSERT /*+ APPEND */ ... SELECT to extract the rows you wish to keep from the partition on which you are currently performing deletions. This results in a standalone table which you can then exchange with the original partition. An INSERT operation is always far faster than any UPDATE or DELETE operation, especially when performed in bulk using APPEND (i.e. direct-path load).
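Here is a rough sketch of that flow against a single partition, with hypothetical names (partition p_20160801, work table big_table_keep) and assuming the rows to keep are the ones that match small_table, as in your original query:

    -- 1) extract only the rows to keep, in parallel, via direct-path CTAS
    create table big_table_keep
      parallel 8 nologging
    as
    select b.*
      from big_table partition (p_20160801) b
     where exists (select 1 from small_table s where s.id = b.id);

    -- 2) build indexes on big_table_keep here to match the local indexes, if any

    -- 3) swap the compacted table in for the old partition (quick DDL, no row movement)
    alter table big_table
      exchange partition p_20160801
      with table big_table_keep
      including indexes
      without validation;

    drop table big_table_keep purge;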

There are several other benefits...

 * the resulting partition is densely populated and compacted, as opposed
   to sparsely populated after a mass deletion, so subsequent operations,
   especially full scans, are faster
     o row density can be even greater if compression is employed
       during direct-path INSERT or CTAS
 * if the resulting partition is never going to have DML performed
   on it again, it can be relocated to a tablespace that is then set
   READ ONLY (see the sketch after this list)
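A hedged sketch of those two follow-ups, again with hypothetical names (tbs_hist as a dedicated tablespace for closed-off history, big_table_ix1 as a local index):

    -- compress the exchanged-in partition (or specify COMPRESS in the CTAS itself)
    alter table big_table move partition p_20160801
      tablespace tbs_hist compress;

    -- rebuild any local index partitions invalidated by the move
    alter index big_table_ix1 rebuild partition p_20160801 tablespace tbs_hist;

    -- once nothing in tbs_hist will ever change again, freeze it
    alter tablespace tbs_hist read only;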


I have presentations, white papers <http://evdbt.com/papers/>, SQL and PL/SQL scripts <http://evdbt.com/scripts/>, and videos <http://evdbt.com/videos/> covering these operations on my website, in case they help:

 * Presentation "Scaling To Infinity: Partitioning Data Warehouses on
   Oracle Database
   
<http://evdbt.com/download/presentation-scaling-to-infinity-partitioning-data-warehouses-on-oracle-database/>"
 * White paper "Scaling To Infinity: Partitioning Data Warehouses on
   Oracle Database
   
<http://evdbt.com/download/paper-scaling-to-infinity-partitioning-data-warehouses-on-oracle-database-2/>"
 * PL/SQL procedure EXCHPART <http://evdbt.com/scripts/>
 * Video "The Fastest UPDATE is an INSERT
   <http://www.youtube.com/watch?v=pvbTAgq_BBY>"


Above all, please remember these two adages...

 * The fastest operation is one you never do
 * /Patient/:  Doctor!  Doctor!  It hurts when I do this! /(waves arms
   comically)/ /Doctor/:  Then don't do that!


Hope this helps...

-Tim



On 8/23/16 08:53, Kellyn Pot'Vin-Gorman wrote:

Have you considered partitioning this table and can it “possibly” be partitioned in a way that would isolate these rows?
Just a wild thought….
Kellyn
On Aug 23, 2016, at 7:39 AM, John Dunn <JDunn@xxxxxxxxx <mailto:JDunn@xxxxxxxxx>> wrote:

Unfortunately it’s a nightly thing….whilst updates are still going on….
*John*
*From:* Chris Taylor [mailto:christopherdtaylor1994@xxxxxxxxx]
*Sent:* 23 August 2016 14:38
*To:* John Dunn
*Cc:* oracle-l@xxxxxxxxxxxxx <mailto:oracle-l@xxxxxxxxxxxxx>
*Subject:* Re: Deletion from large table
Is this a one time thing, or a regularly occurring thing? (A one time data cleanup versus a nightly routine)
If it's a one time data cleanup (or rarely needed), I'd recommend saving off the rows you want to keep into another table, truncate the big_table and reload the rows from the temporary table you created to save the rows you wanted.
Delete is one of the (if not THE) single most expensive operations you can run in a database (I'm sure you're aware of that, but I wanted to mention it).
Chris
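A minimal sketch of that save/truncate/reload approach, using hypothetical names and the keep predicate from John's query below; note that TRUNCATE is DDL and cannot be rolled back, so it only fits a maintenance window with no concurrent DML:

    -- 1) save the rows to keep (direct-path CTAS)
    create table big_table_keep nologging as
    select b.*
      from big_table b
     where exists (select 1 from small_table s where s.id = b.id);

    -- 2) empty the big table quickly
    truncate table big_table;

    -- 3) reload the kept rows with a direct-path insert
    insert /*+ append */ into big_table
    select * from big_table_keep;
    commit;

    drop table big_table_keep purge;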
On Tue, Aug 23, 2016 at 5:17 AM, John Dunn <JDunn@xxxxxxxxx <mailto:JDunn@xxxxxxxxx>> wrote:
I need to delete large numbers of rows from a large table based upon whether a record exists in a small table.
I am currently using :
delete from big_table b where not exists (select 1 from small_table s where s.id = b.id)
big_table may have up to 100,000 rows for the same id value.
small_table will only have one row per id value.
Is there a better way to code this?
*John*

