Re: Oracle Development - Best Practice

  • From: Tim Gorman <tim@xxxxxxxxxxxxx>
  • To: <oracle-l@xxxxxxxxxxxxx>
  • Date: Sat, 07 Feb 2004 22:12:45 -0700

The reason I asked for proof is because, when I saw the assertion, I didn't
like it.  So, I set about proving it with a quick little test.  Then, I
replied asking for proof, because I may have misunderstood the assertion and
missed the point.
So, here was my proof...

> SQL> create table x (a number, b number);
> 
> Table created.
> 
> SQL> set autotrace traceonly statistics
> SQL> 
> SQL> insert into x values (1,1);
> 
> 1 row created.
> 
> Statistics
> ----------------------------------------------------------
>           2  recursive calls
>           7  db block gets
>           2  consistent gets
>           0  physical reads
>         588  redo size
>         622  bytes sent via SQL*Net to client
>         526  bytes received via SQL*Net from client
>           3  SQL*Net roundtrips to/from client
>           2  sorts (memory)
>           0  sorts (disk)
>           1  rows processed
> 
> SQL> insert into x values (2,2);
> 
> 1 row created.
> 
> Statistics
> ----------------------------------------------------------
>           0  recursive calls
>           1  db block gets
>           1  consistent gets
>           0  physical reads
>         240  redo size
>         623  bytes sent via SQL*Net to client
>         526  bytes received via SQL*Net from client
>           3  SQL*Net roundtrips to/from client
>           1  sorts (memory)
>           0  sorts (disk)
>           1  rows processed
> 
> SQL> insert into x values (3,3);
> 
> 1 row created.
> 
> Statistics
> ----------------------------------------------------------
>           0  recursive calls
>           1  db block gets
>           1  consistent gets
>           0  physical reads
>         240  redo size
>         623  bytes sent via SQL*Net to client
>         526  bytes received via SQL*Net from client
>           3  SQL*Net roundtrips to/from client
>           1  sorts (memory)
>           0  sorts (disk)
>           1  rows processed

I highlighted the statistics for "db block gets" and "consistent gets",
which sum to "logical reads", in bold type.

OK, so after the initial row insert (which took two recursive calls, both on
data dictionary tables, and a total of nine logical reads), inserts into
this simple table take 2 logical reads apiece:  one "db block get" and one
"consistent get".

So that's the baseline, executed without triggers.

Now, let's create a BEFORE INSERT trigger and re-try the test...

> SQL> create trigger xx
>   2     before insert on x for each row
>   3  begin
>   4     if :new.a > 0 then
>   5         :new.b := :new.b + 1;
>   6     end if;
>   7  end xx;
> SQL> /
> 
> Trigger created.
> 
> SQL> insert into x values (4,4);
> 
> 1 row created.
> 
> Statistics
> ----------------------------------------------------------
>           3  recursive calls
>           2  db block gets
>           8  consistent gets
>           0  physical reads
>         364  redo size
>         624  bytes sent via SQL*Net to client
>         526  bytes received via SQL*Net from client
>           3  SQL*Net roundtrips to/from client
>           2  sorts (memory)
>           0  sorts (disk)
>           1  rows processed
> 
> SQL> insert into x values (5,5);
> 
> 1 row created.
> 
> Statistics
> ----------------------------------------------------------
>           0  recursive calls
>           1  db block gets
>           1  consistent gets
>           0  physical reads
>         240  redo size
>         624  bytes sent via SQL*Net to client
>         526  bytes received via SQL*Net from client
>           3  SQL*Net roundtrips to/from client
>           1  sorts (memory)
>           0  sorts (disk)
>           1  rows processed
> 
> SQL>  insert into x values (6,6);
> 
> 1 row created.
> 
> Statistics
> ----------------------------------------------------------
>           0  recursive calls
>           1  db block gets
>           1  consistent gets
>           0  physical reads
>         240  redo size
>         625  bytes sent via SQL*Net to client
>         526  bytes received via SQL*Net from client
>           3  SQL*Net roundtrips to/from client
>           1  sorts (memory)
>           0  sorts (disk)
>           1  rows processed

Once again, the first INSERT statement (after the trigger was created)
resulted in a bunch of recursive calls and a total of 10 logical reads (i.e.
2 "db block gets" and 8 "consistent gets").  As before, SQL tracing showed
these recursive calls to be queries on the data dictionary.

And, as before, subsequent INSERTs showed the same two logical reads and no
recursive calls per INSERT.  Also, please note that the "redo size" for the
5th and 6th inserts is exactly the same as for the 2nd and 3rd inserts,
though that's not unexpected.

The trigger did fire, as the last three inserts show the value in the B
column having been incremented:

> SQL> select * from x;
> 
>          A          B
> ---------- ----------
>          1          1
>          2          2
>          3          3
>          4          5
>          5          6
>          6          7

So, I don't see any evidence of additional logical I/O associated with
BEFORE INSERT triggers.

Database:  v9.2.0.1.0 EE.  Platform:  Apple iBook laptop running Mac OS X
v10.2.8.  Obviously, different versions may give different results.  For
example, back in v7.0.x, I believe (though I'm not positive) that triggers
had to be compiled with every execution, as if they were "anonymous PL/SQL
blocks" instead of stored and compiled objects.  That definitely hurt...

Also, could we possibly be talking about statement-level BEFORE triggers,
instead of row-level?
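(For reference, the syntactic difference is just the FOR EACH ROW clause; a
minimal sketch, with made-up trigger names, against the same test table:

```sql
-- Row-level BEFORE trigger: fires once per affected row and can
-- read/modify :NEW column values (what the test above used).
create or replace trigger x_before_row
   before insert on x
   for each row
begin
   if :new.a > 0 then
      :new.b := :new.b + 1;
   end if;
end;
/

-- Statement-level BEFORE trigger: no FOR EACH ROW clause; fires once
-- per statement and has no access to :NEW or :OLD values at all.
create or replace trigger x_before_stmt
   before insert on x
begin
   null;   -- e.g. auditing or privilege checks go here
end;
/
```

A statement-level trigger obviously can't be doing per-row block reads, so
the comparison would be apples to oranges.)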

So anyway, how about one more directive for developers and DBAs alike:

   12)  Don't believe everything that you read or are told.  Prove it.

Might be better to make that directive #1, no?

I think if we translated that to Latin, it would look like Cary's email
signature...  :-)




on 2/7/04 6:19 PM, Bobak, Mark at Mark.Bobak@xxxxxxxxxxxxxxx wrote:

> Tim,
> As far as his #8, I think he got that from Steve Adams.  Steve did some work
> a while ago that determined that, all else being equal, after triggers
> were cheaper than before triggers.  Unfortunately, I don't remember the
> details, and the cursory search I just did on IxOra didn't yield any hits.
> 
> As I recall, the difference was pretty subtle.  People need to keep in mind,
> when Steve says X is cheaper than Y, no doubt he's absolutely right, but
> sometimes, these are small, subtle differences.  I'm not sure I'd
> extrapolate his results into a directive to avoid before triggers at all
> costs.
> 
> -Mark
> 
> 
> -----Original Message-----
> From:    Tim Gorman [mailto:tim@xxxxxxxxxxxxx]
> Sent:    Sat 2/7/2004 7:16 PM
> To:    oracle-l@xxxxxxxxxxxxx
> Cc:    
> Subject:    Re: Oracle Development - Best Practice
> 9) Use DBMS_APPLICATION_INFO procedures to "register" a running program by
> setting the MODULE and ACTION columns in the V$SESSION and V$SQL views.
> Provide a standard set of code fragments to developers to be executed
> immediately after the module entry point and before all module exit points
> (including error handlers).
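
[A minimal sketch of what such a standard fragment might look like; the
module and action names here are made up:

```sql
begin
   -- at the module entry point:
   dbms_application_info.set_module(
      module_name => 'ORDER_LOAD',      -- hypothetical module name
      action_name => 'VALIDATE');

   -- ... do work; update the ACTION as the program moves along:
   dbms_application_info.set_action('INSERT ROWS');

   -- at every module exit point, including error handlers:
   dbms_application_info.set_module(null, null);
end;
/
```

While the program runs, its MODULE and ACTION are then visible to any DBA
querying V$SESSION.]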
> 
> 10) As part of allowing program modules to be cleared for production, review
> TKPROF reports from SQL traces from unit-testing.  Have one member of a team
> of 1-5 people who know how to read TKPROF reports approve or disapprove.
> Nothing goes to UAT or production without this approval...
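
[For example, generating the trace for such a review might look like this;
trace file names and sort options will vary by platform and preference:

```sql
alter session set sql_trace = true;
-- ... run the unit test here ...
alter session set sql_trace = false;

-- then, at the OS prompt, format the trace file with TKPROF, e.g.:
--   tkprof <sid>_ora_<pid>.trc report.prf sort=prsela,exeela,fchela
```
]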
> 
> 11) Peer code reviews should be performed frequently enough to keep "best
> practices" document up to date.  Peer code reviews should not only ensure
> that "best practices" are applied, but also that "not-so-good practices" are
> revised or removed and "better practices" are added...
> 
> ---
> 
> Some notes:
> 
>   - #1 Applies to every language, including SQL.  Just because a language
> supports implicit conversions doesn't mean you should ever use it.
> 
>   - #2 Doesn't make much sense in PL/SQL, because PL/SQL variables are so
> easy to use as bind variables -- there just doesn't seem to be any other way
> to do things.  The danger there is dynamic SQL, where people get carried
> away with building the SQL statement in a string and forget to use bind
> variables...
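
[A quick sketch of the difference; table and variable names are made up:

```sql
-- Dangerous: the literal is concatenated into the statement, so every
-- distinct value produces a distinct statement and a hard parse.
execute immediate
   'update emp set sal = sal * 1.1 where empno = ' || to_char(p_empno);

-- Better: a bind variable via the USING clause; one shared cursor
-- regardless of the value passed in.
execute immediate
   'update emp set sal = sal * 1.1 where empno = :b1'
   using p_empno;
```
]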
> 
>   - #3 What exactly is meant by "wherever required"?  Kinda vague.
> Better to say something like "use BULK operations for bulk operations" or
> say nothing at all.  People can get into a lot of trouble trying to push a
> square peg into a round hole unnecessarily...
> 
>   - #4 "Write code that fits into your screen size"?  Assuming the number
> of people who develop from a Blackberry are as rare as those who use 35"
> monitors, this can still lead to unreadable and unmaintainable code.  I
> suspect that this would fall out after the first code review or so.  Just as
> with writing tax policy, strange directives can lead to strange results...
> 
>   - #5 For one thing, the OTHERS exception must be the last handler
> specified, so it's always "the last resort" anyway.  And there's nothing
> wrong with using declared exceptions (i.e. NO_DATA_FOUND, etc) and defining
> your own where pre-defined ones don't exist (i.e. PRAGMA EXCEPTION_INIT),
> instead of building up a big IF SQLCODE ... THEN ... ELSIF ... structure...
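
[For instance, the sqlcode=-54 branch in the example quoted further down
could be a declared exception instead; a sketch:

```sql
declare
   resource_busy exception;
   pragma exception_init(resource_busy, -54);   -- ORA-00054
begin
   -- ... work that might hit a busy resource ...
   null;
exception
   when resource_busy then
      null;   -- ... deal with it here ...
   -- no WHEN OTHERS handler: anything else simply propagates
end;
/
```
]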
> 
>   - #7 "No code change necessary when schema changes"???  Doubt it!!!  How
> about "fewer code changes necessary"...
> 
>   - #8 That's a new one by me.  Can you prove it?
> 
> Hope this helps...
> 
> -Tim
> 
> 
> on 2/7/04 4:35 PM, jaysingh1@xxxxxxxxxxxxx at jaysingh1@xxxxxxxxxxxxx wrote:
> 
>> Dear All,
>> 
>> We are starting a new oracle development project and my boss wants me to
>> prepare "Oracle Development- Best practice" document/presentation kind of
>> stuff.  Basically this is to avoid common mistakes during the development
>> cycle.
>> 
>> I have few points..
>> 
>> For example,
>> 
>> 1) While writing pl/sql, use the correct datatype so that implicit conversion
>> will be avoided
>> 
>> 2) Use bind variable to avoid unnecessary parsing
>> 
>> 3) Use BULK COLLECT, % ATTRIBUTES wherever required
>> 
>> 4) MODULARITY
>> Write the code that fit into your screen size.
>> Through successive refinement, you can reduce a complex problem to a
>> set of simple problems that have easy-to-implement solutions.
>> 
>> 5) EXCEPTION WHEN OTHERS is almost always a BUG unless it is immediately
>> followed by a RAISE.  Use the WHEN OTHERS exception as the last resort and
>> handle exceptions.
>> 
>> For Example,
>> EXCEPTION
>> WHEN OTHERS THEN
>>     if (sqlcode=-54) then
>>           .... deal with it.
>>     else
>>           RAISE;
>>     end if;
>> 
>> 
>> 6) Tom's Mantra
>> 
>> If (possible in SQL)
>> do it;
>> else if(possible in PL/SQL)
>> do it;
>> else if(possible in JAVA)
>> do it;
>> else
>> ..
>> ..
>> end if;
>> 
>> 7)% ATTRIBUTES
>> 
>> Use %TYPE and %ROWTYPE attributes. No code change is required when schema
>> structure changes.
>> 
>> 8) BEFORE VS AFTER TRIGGER
>> 
>> NEVER USE BEFORE TRIGGER FOR VALIDATIONS. Use BEFORE triggers ONLY to
>> modify :NEW value.
>> 
>> AFTER row triggers are slightly more efficient than BEFORE
>> row triggers. With BEFORE row triggers, affected data blocks must
>> be read (logical read, not physical read) once for the trigger and
>> then again for the triggering statement.
>> Alternatively, with AFTER row triggers, the data blocks must be
>> read only once for both the triggering statement and the trigger.
>> 
>> These are only a few points w.r.t. Oracle developers.  I'd like to get more
>> info from you.
>> 
>> Your help would be really appreciated.
>> 
>> Thanks
>> Jay

----------------------------------------------------------------
Please see the official ORACLE-L FAQ: http://www.orafaq.com
----------------------------------------------------------------
To unsubscribe send email to:  oracle-l-request@xxxxxxxxxxxxx
put 'unsubscribe' in the subject line.
--
Archives are at //www.freelists.org/archives/oracle-l/
FAQ is at //www.freelists.org/help/fom-serve/cache/1.html
-----------------------------------------------------------------
