I know there have been a lot of comments on this, but we had a similar issue
this week on our ADG standby, so I opened a case with Oracle.
They came back with this - which is interesting, because the application
issues pretty much the same SQL all the time - so why would the SQL be
invalid or fail to parse? Odd.
But I set the event and have been looking to see if it recurs.
I point this out because maybe you have the same issue with SQLs that are
somehow "fubar" and Oracle isn't able to parse them - I had no idea that
bad parsing could cause shared pool contention.
Customer Proposed Action Plan
1) Failed Parses
Please execute the following actions:
1) Set event 10035:
ALTER SYSTEM SET EVENTS '10035 trace name context forever, level 1';
2) Once the event is set, any statement that fails to parse due to an error
will be recorded in the alert.log.
3) Fix the application to issue valid SQL.
4) For details, refer to Doc ID 1353015.1.
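For what it's worth, once you've collected enough evidence you can switch the event back off the same way; a minimal sketch using the standard event syntax:

```sql
-- Stop logging failed parses to the alert.log once diagnosis is done.
ALTER SYSTEM SET EVENTS '10035 trace name context off';
```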
This database instance is spending too much time in failed parse activity.
When massive, failed parses can lead to unexpected database behavior,
usually related to mutex contention.
Parse time activity in this database is high, and 16.68% of it is
attributable to failed parses.
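A quick way to sanity-check a number like that yourself is to compare the instance-wide parse counters. This is a sketch, assuming the 'parse count (failures)' statistic is present in your release (it is in recent versions):

```sql
-- Estimate the share of parse calls that failed, from V$SYSSTAT counters.
-- Note these are cumulative since instance startup, not interval-based.
SELECT ROUND(100 * f.value / NULLIF(t.value, 0), 2) AS failed_parse_pct
FROM   v$sysstat t,
       v$sysstat f
WHERE  t.name = 'parse count (total)'
AND    f.name = 'parse count (failures)';
```

For an interval figure closer to what the SR quotes, you would diff two snapshots (or use AWR) rather than read the cumulative values directly.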
How to Identify Hard Parse Failures (Doc ID 1353015.1)
High Waits for 'library cache lock' and 'library cache: mutex X' Due to
Parse Failures When Using JDBC ResultSet.TYPE_SCROLL_SENSITIVE (Doc ID )
Troubleshooting 'cursor: pin S wait on X' waits. (Doc ID 1349387.1)
On Mon, Sep 20, 2021 at 3:25 PM Pap <oracle.developer35@xxxxxxxxx> wrote:
Hi, we have a customer application in which we occasionally see high wait
events like 'cursor: mutex' and 'library cache lock' for one select query,
and a specific piece of functionality is impacted as a result. This select
query (which is part of a PL/SQL procedure) is a quick query that runs ~5
million times/hr. Even though the number of executions stays roughly the
same throughout the day, the query still hit these odd wait events, driving
the per-execution time up for a ~15 minute window and causing slowness.
During this period, ASH shows in_hard_parse = 'N' for this query, but
in_parse as both 'Y' and 'N'. We also saw that a stats gather was running
on the base object at the same time, and we have no_invalidate set to
'FALSE' as a table stats preference. So we wanted to understand from the
experts: can this really be a parsing issue, and should we delete the
no_invalidate preference so that the table inherits the default global
preference, i.e. no_invalidate=>auto? The database version is 19c.
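If it helps frame the question: the preference change being considered would look roughly like the sketch below. The owner/table names (APP_OWNER, BASE_TABLE) are placeholders, not from the original mail:

```sql
-- Check the current table-level setting (placeholders for owner/table).
SELECT DBMS_STATS.GET_PREFS('NO_INVALIDATE', 'APP_OWNER', 'BASE_TABLE')
FROM   dual;

-- Option 1: remove the table-level override so the table inherits the
-- global default (AUTO_INVALIDATE, which defers cursor invalidation).
EXEC DBMS_STATS.DELETE_TABLE_PREFS('APP_OWNER', 'BASE_TABLE', 'NO_INVALIDATE');

-- Option 2: set auto-invalidation explicitly at the table level.
EXEC DBMS_STATS.SET_TABLE_PREFS('APP_OWNER', 'BASE_TABLE', -
  'NO_INVALIDATE', 'DBMS_STATS.AUTO_INVALIDATE');
```

The relevance to the wait events: with NO_INVALIDATE => FALSE, a stats gather invalidates all dependent cursors immediately, so 5 million executions/hr can pile up re-parsing the same statement at once, which is exactly the kind of library cache lock/mutex storm described above. AUTO_INVALIDATE spreads the invalidations over a window instead.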