RE: Performance challenge.

  • From: "Tony Aponte" <Tony_Aponte@xxxxxxxxx>
  • To: <WLJohnson@xxxxxxxxxxxxxxxxxxx>, <oracle-l@xxxxxxxxxxxxx>
  • Date: Tue, 17 Apr 2007 13:52:28 -0400

Analyzing an object invalidates parsed execution plans that use or could
use the object.  Also, look up Bind Variable Peeking.  The behavior you
describe indicates that the optimizer is peeking at the bind variable
values on the very next parse operation for a particular SQL statement.
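A minimal sketch of what peeking looks like in practice (the table and column names below are hypothetical, not from Bill's system). On 9i, the first hard parse after an invalidation peeks at whatever bind value happens to be supplied, and that plan is then reused for every later execution of the same statement:

```sql
-- Hypothetical illustration of bind variable peeking.
VARIABLE status VARCHAR2(10)

-- First hard parse: optimizer peeks at 'CLOSED' (a rare value)
-- and may choose an index-driven plan.
EXEC :status := 'CLOSED'
SELECT COUNT(*) FROM orders WHERE status = :status;

-- Later executions reuse that same plan, even though 'OPEN' is a
-- common value that might have warranted a full scan.
EXEC :status := 'OPEN'
SELECT COUNT(*) FROM orders WHERE status = :status;
```

Whichever value is peeked first wins until the cursor is invalidated or aged out.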

 

Just a hunch, but do you see tricky SQL that tries to work for multiple
scenarios (i.e. WHERE tab.col1 = NVL(:bind1,tab.col1) or something like
that)?  I'm dealing with this situation right now.  The developers want
to use one SQL statement because someone told them it was better for the
connection pool cache.  I've already requested that they use SQL
statements coded specifically for the business problem instead of trying
to make it fit the technology.  If you are in the same boat, my heart
goes out to you.
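For illustration, here is the contrast I mean (table and bind names are made up). The generic form must be covered by one plan regardless of which binds are actually supplied, while purpose-built statements each get a plan suited to their predicate:

```sql
-- One "generic" statement trying to serve every search combination;
-- the optimizer must commit to a single plan for all of them:
SELECT * FROM orders o
 WHERE o.customer_id = NVL(:cust_id, o.customer_id)
   AND o.status      = NVL(:status,  o.status);

-- Purpose-built alternatives, each eligible for its own plan:
SELECT * FROM orders o WHERE o.customer_id = :cust_id;  -- by customer
SELECT * FROM orders o WHERE o.status = :status;        -- by status
```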

 

Tony Aponte

________________________________

From: oracle-l-bounce@xxxxxxxxxxxxx
[mailto:oracle-l-bounce@xxxxxxxxxxxxx] On Behalf Of Johnson, William L
(TEIS)
Sent: Tuesday, April 17, 2007 1:04 PM
To: oracle-l@xxxxxxxxxxxxx
Subject: Performance challenge.

 

We are running a version of Matrix One's Document Management system.  My
team inherited the DBA work from another group in our organization.
This group had developed a nice process to update statistics on an
hourly basis based on a 2% threshold change to data in the base tables.
The scripts used dbms_stats to recalculate the stats.  Recently, the
scripts have had some issues and have not run for several hours - and
sure enough user complaints began to pour into the support teams.
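[For context, a sketch of the kind of call such scripts typically make; the schema and table names here are hypothetical. In 9.2, GATHER_TABLE_STATS invalidates dependent cursors by default (no_invalidate => FALSE), which is what forces the next execution to hard-parse and peek at the binds again:

```sql
-- Hypothetical hourly stats refresh for one table.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'MATRIX',      -- assumed schema name
    tabname          => 'DOCUMENTS',   -- assumed table name
    estimate_percent => 10,
    cascade          => TRUE,          -- also gather index stats
    no_invalidate    => FALSE);        -- invalidate dependent cursors
END;
/
```
]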

 

I am baffled that a database with only 100GB allocated could possibly
require stats to be recomputed on an hourly basis to provide adequate
response time.  While I am not an expert, this application appears to
generate its own dynamic MQL queries, so we cannot add hints to ensure
performance.  We are running Oracle 9.2.0.6 on Solaris hardware.

 

So, after all of that, what is my real question?  Does analyzing a table
invalidate all parsed statements in v$sqlarea - in effect getting rid of
a poor explain plan that may be causing other queries to run slowly?  We
appear to have seen evidence of this before, where an application can
provide various bind variables to a SQL statement.  Depending on who
first ran the query, and with what range of values, Oracle would
generate a different access path and sit on that path until the SQL
statement had either aged out of the shared_pool or was forced out by an
ALTER SYSTEM FLUSH SHARED_POOL command.  (It also appears you cannot
flush any active statements from the shared_pool.)
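[A sketch of the two eviction routes described above, plus a way to watch for invalidations; the schema, table, and SQL text below are hypothetical. Flushing the shared pool evicts all non-pinned cursors, while re-gathering stats invalidates only the cursors that depend on that table:

```sql
-- Broad hammer: evict everything not pinned from the library cache.
ALTER SYSTEM FLUSH SHARED_POOL;

-- Narrower alternative: invalidate only cursors that depend on one table.
EXEC DBMS_STATS.GATHER_TABLE_STATS('MATRIX', 'DOCUMENTS', no_invalidate => FALSE)

-- Observe cached cursors and their invalidation counts.
SELECT sql_text, executions, loads, invalidations
  FROM v$sqlarea
 WHERE sql_text LIKE 'SELECT * FROM orders%';
```
]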

 

 

Anyone have experience with this or have some good documentation I could
read?

 

Thanks!

Bill Johnson
