Re: Sincere Advice on Sql Plan - Thanks -Please Help in Understanding

Methinks that shared memory reloads are a direct consequence of having 
zillions of hard-coded queries executing only once. There is no need to 
add anything to the shared pool before the other issue is fixed (by the 
way, you could add whatever you want without changing much of anything).
Therefore, the really first thing to do is:
1) Change the code and use bind variables (the original poster may be 
interested in a short paper on the topic).
2) If this option is not possible (source code unavailable, politics 
...), plan B is to use cursor_sharing = force in the init.ora file, with 
a number of warnings about side effects (bugs in a number of versions, 
and possible problems with the use of histograms and star schemas).
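A minimal sketch of what "use bind variables" means at the code level. It 
uses Python's standard sqlite3 module purely because it is self-contained; 
the table and values below are made up for illustration, and an Oracle 
program (OCI, JDBC, Pro*C) would use :name placeholders in the same way:

```python
import sqlite3

# Illustrative schema only -- not the poster's real tables.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE g_regions (id INTEGER, sub_type INTEGER, r_value TEXT)")
conn.executemany("INSERT INTO g_regions VALUES (?, ?, ?)",
                 [(1, 2419, "north"), (2, 2419, "south"), (3, 17, "west")])

# Hard-coded literal: every distinct value yields a distinct statement
# text, hence a fresh hard parse in Oracle's shared pool each time.
bad = conn.execute(
    "SELECT r_value FROM g_regions WHERE sub_type = 2419").fetchall()

# Bind variable: one statement text, parsed once, reused for any value.
good = conn.execute(
    "SELECT r_value FROM g_regions WHERE sub_type = :st",
    {"st": 2419}).fetchall()

print(bad == good)  # same rows; only the second form is shareable
```

Same result either way; the difference is that the bound form leaves a 
single reusable cursor behind instead of one per literal value.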

Totally forget about the shared pool as long as statistics show 
excessive parsing. You just have one statement chasing the other.
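One way to confirm that parsing is indeed the problem (assuming you can 
query V$SYSSTAT) is to compare hard parses to total parses; when nearly 
every parse is a hard parse, almost nothing in the shared pool is reused:

```sql
-- Hard parses close to total parses means statements are not shared.
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('parse count (total)', 'parse count (hard)');
```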

Also, from the execution plan (snipped, along with a lot of other stuff, 
to avoid the quoted text being longer than my own input) I suspect that 
indexing l_translations on (l_type_gr_id, geo_id) might help. Generally 
speaking I like to keep indexing light, but the name of the table leads 
me to think that it isn't heavily updated.
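For the record, the suggested index in Oracle syntax would be something 
like the following (the index name is mine, and the table and column 
names come from the snipped plan, so check them against the real schema):

```sql
CREATE INDEX l_translations_typegr_geo_ix
    ON l_translations (l_type_gr_id, geo_id);
```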


S Faroult

Juan Carlos Reyes Pacheco wrote:

>Hi Sanjay, as you can see, neither 2419 nor 291 is bound;
>send me your full trace file.
>The first question: can't you try to bind those variables, or do you have
>histograms on those columns?
>Please show me your library cache hit ratio, so we can see whether your
>shared pool is too small.
>The following query shows the number of reloads from the library cache
>caused by insufficient memory; if the ratio is more than 1%, you can think
>about increasing the shared pool size.
>Juan Carlos Reyes Pacheco
>-------Original Message-------
>From: Sanjay Mishra
>Date: 10/12/04 17:22:53
>To: Juan Carlos Reyes Pacheco; mrothouse@xxxxxxxxx; oracle-l@xxxxxxxxxxxxx
>Subject: Re: Sincere Advice on Sql Plan - Thanks -Please Help in
>There are several query like this and I am giving one of the recent trace.
>First is the TKPROF output and second is Trace Data
>SELECT DECODE(lt.t_value, NULL, gr.r_value,
>lt.t_value) description, TO_CHAR( id
>g_regions gr, l_translations lt WHERE sub_type = 2419 AND
>lt.geo_id (+) = AND lt.l_type_gr_id (+) = 291 ORDER BY
>call     count       cpu    elapsed       disk      query    current        rows
>------- ------  -------- ---------- ---------- ---------- ----------  ----------
>Parse       10      0.01       0.00          0          0          0           0
>Execute      5      0.00       0.00          0          0          0           0
>Fetch       60      0.29       0.31          0       1240          0        1155
>------- ------  -------- ---------- ---------- ---------- ----------  ----------
>total       75      0.30       0.32          0       1240          0        1155

