[yunqa.de] Re: max(rowid) instead of Count(*) for large databases?

  • From: Delphi Inspiration <delphi@xxxxxxxx>
  • To: yunqa@xxxxxxxxxxxxx
  • Date: Tue, 11 Dec 2012 23:23:41 +0100

On 11.12.2012 19:43, email@xxxxxxxxxx wrote:

> Speed-wise, does the trigger take db transactions into account? I mean,
> if I execute this:
> 
>     BEGIN TRANSACTION;
> 
>         INSERT INTO t VALUES (1);
>         INSERT INTO t VALUES (2);
>         INSERT INTO t VALUES (3);
>         INSERT INTO t VALUES (4);
> 
>         DELETE FROM t WHERE a = 1;
> 
>     COMMIT;
> 
> Does the trigger kick in once at commit (taking all the changes into
> account), or is it executed 5 times, once after each INSERT/DELETE?

5 times. SQLite triggers fire once for each affected row, regardless of
transaction boundaries.
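This is easy to verify. The sketch below uses Python's built-in sqlite3
module as a stand-in for DISQLite3 (table and trigger names are made up):
it logs each trigger firing and runs the quoted statements in a single
transaction.

```python
import sqlite3

# Count how often AFTER INSERT / AFTER DELETE triggers fire inside one
# transaction. Table and trigger names here are purely illustrative.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t (a INTEGER);
    CREATE TABLE trigger_log (fired INTEGER);
    CREATE TRIGGER t_ins AFTER INSERT ON t BEGIN
        INSERT INTO trigger_log VALUES (1);
    END;
    CREATE TRIGGER t_del AFTER DELETE ON t BEGIN
        INSERT INTO trigger_log VALUES (1);
    END;
""")
with con:  # one transaction, committed at the end of the block
    con.execute("INSERT INTO t VALUES (1)")
    con.execute("INSERT INTO t VALUES (2)")
    con.execute("INSERT INTO t VALUES (3)")
    con.execute("INSERT INTO t VALUES (4)")
    con.execute("DELETE FROM t WHERE a = 1")
firings = con.execute("SELECT COUNT(*) FROM trigger_log").fetchone()[0]
print(firings)  # 5: each INSERT/DELETE fires its trigger individually
```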

> In other words, how much will the trigger slow the INSERT / DELETE /
> UPDATE statements?

My measurements with the DISQLite3_Log_Inserts demo indicate that the
trigger slows down inserts by about 10 to 15 percent. Even with the
trigger, it achieves insertion rates of over 40000 records per second.

Your timings may vary, as the slowdown depends on your schema and data.
When inserting big records (many columns, large blobs), the trigger's
relative overhead is smaller than for tiny records. It is best to
conduct your own benchmarks.
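A benchmark along these lines can be sketched with Python's built-in
sqlite3 module (the DISQLite3 demo itself is Delphi; schema, trigger,
and function names below are assumptions for illustration only):

```python
import sqlite3
import time

def bench(with_trigger, n=10000):
    """Time n single-row inserts, optionally with a count-keeping trigger.

    Returns (elapsed_seconds, stored_count). Absolute numbers depend on
    schema, record size, and hardware; only the relative cost matters.
    """
    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE t (a INTEGER);
        CREATE TABLE row_count (n INTEGER);
        INSERT INTO row_count VALUES (0);
    """)
    if with_trigger:
        con.execute("""CREATE TRIGGER t_ins AFTER INSERT ON t BEGIN
                           UPDATE row_count SET n = n + 1;
                       END""")
    t0 = time.perf_counter()
    with con:  # one transaction around all inserts
        con.executemany("INSERT INTO t VALUES (?)", ((i,) for i in range(n)))
    elapsed = time.perf_counter() - t0
    count = con.execute("SELECT n FROM row_count").fetchone()[0]
    return elapsed, count

plain_t, plain_n = bench(with_trigger=False)
trig_t, trig_n = bench(with_trigger=True)
print(f"no trigger: {plain_t:.3f}s, with trigger: {trig_t:.3f}s")
```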

Alternatively, bulk-insert without the trigger, then calculate the count
once, store it in the table, and finally activate the trigger for future
smaller inserts and deletes.
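That workflow might look like this, again sketched with Python's
built-in sqlite3 module (all table and trigger names are hypothetical):
bulk-load trigger-free, seed the stored count with one full scan, then
create the triggers so later changes keep it current incrementally.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t (a INTEGER);
    CREATE TABLE row_count (n INTEGER);
""")
# 1. Fast bulk load with no trigger overhead.
with con:
    con.executemany("INSERT INTO t VALUES (?)", ((i,) for i in range(100000)))
# 2. Calculate and store the count once (a single full scan).
con.execute("INSERT INTO row_count SELECT COUNT(*) FROM t")
# 3. Activate the triggers; from here on the count is maintained
#    incrementally by each small insert/delete.
con.executescript("""
    CREATE TRIGGER t_ins AFTER INSERT ON t BEGIN
        UPDATE row_count SET n = n + 1;
    END;
    CREATE TRIGGER t_del AFTER DELETE ON t BEGIN
        UPDATE row_count SET n = n - 1;
    END;
""")
con.execute("INSERT INTO t VALUES (-1)")
final = con.execute("SELECT n FROM row_count").fetchone()[0]
print(final)  # 100001
```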

Ralf
_______________________________________________
Delphi Inspiration mailing list
yunqa@xxxxxxxxxxxxx
//www.freelists.org/list/yunqa


