Performance challenge.

  • From: "Johnson, William L (TEIS)" <WLJohnson@xxxxxxxxxxxxxxxxxxx>
  • To: <oracle-l@xxxxxxxxxxxxx>
  • Date: Tue, 17 Apr 2007 13:03:40 -0400

We are running a version of Matrix One's Document Management system.  My
team inherited the DBA work from another group in our organization.
That group had developed a nice process to update statistics hourly
whenever the data in a base table had changed by more than a 2%
threshold.  The scripts used dbms_stats to recalculate the stats.
Recently the scripts ran into some issues and did not run for several
hours - and, sure enough, user complaints began to pour in to the
support teams.
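
For context, the threshold check was conceptually along the lines of the
sketch below.  This is my reconstruction, not the actual script: the
schema is hypothetical, and it assumes MONITORING has been enabled on
the base tables so that change counts accumulate in
dba_tab_modifications.

    -- Rough sketch of a 2% change-threshold check on 9.2 (hypothetical):
    -- each base table needs ALTER TABLE ... MONITORING for this to work.
    SELECT m.table_owner,
           m.table_name,
           m.inserts + m.updates + m.deletes AS changes,
           t.num_rows
      FROM dba_tab_modifications m,
           dba_tables            t
     WHERE t.owner      = m.table_owner
       AND t.table_name = m.table_name
       AND t.num_rows   > 0
       AND (m.inserts + m.updates + m.deletes) / t.num_rows > 0.02;

Tables flagged by a query like this would then be re-analyzed with
dbms_stats.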

I am baffled that a database with only 100 GB allocated could possibly
require stats to be recomputed hourly to provide adequate response
times.  While I am not an expert, this application appears to generate
its own dynamic MQL queries, so we cannot add hints to ensure
performance.  We are running Oracle 9.2.0.6 on Solaris hardware.

So, after all of that, what is my real question?  Does analyzing a table
invalidate all parsed statements in v$sqlarea - in effect getting rid of
a poor explain plan that may be causing other queries to run slowly?  We
appear to have seen evidence of this before, where an application can
supply a wide range of bind variable values to a SQL statement.
Depending on who first ran the query, Oracle would peek at those values,
generate a particular access path, and sit on that path until the SQL
statement either aged out of the shared pool or was forced out by an
ALTER SYSTEM FLUSH SHARED_POOL command.  (It also appears you cannot
flush any active statements from the shared pool.)
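
From what I can tell from the dbms_stats documentation, the invalidation
is actually controllable: gather_table_stats takes a no_invalidate
parameter, and with the default of FALSE the dependent cursors in the
shared pool are marked invalid, forcing a hard parse and a fresh plan on
the next execution.  A rough sketch (schema and table names are made
up):

    -- Hypothetical 9.2 stats gather; no_invalidate => FALSE (the
    -- default) invalidates dependent cursors so they re-parse against
    -- the new stats.
    BEGIN
      dbms_stats.gather_table_stats(
        ownname          => 'MATRIX',     -- hypothetical schema
        tabname          => 'DOCUMENTS',  -- hypothetical table
        estimate_percent => 10,
        cascade          => TRUE,         -- also gather index stats
        no_invalidate    => FALSE);
    END;
    /

And something like the following against v$sqlarea should show whether a
cached statement has actually been invalidated and re-parsed:

    -- invalidations increments each time the cursor is invalidated
    SELECT sql_text, executions, parse_calls, invalidations
      FROM v$sqlarea
     WHERE sql_text LIKE '%DOCUMENTS%';   -- made-up filter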

Anyone have experience with this or have some good documentation I could
read?

Thanks!

Bill Johnson
