I think that's part of it, but the table and partitions involved in the
partition exchange aren't in use at the time the exchange occurs.
So, we have a view that points at either TABLE1 or TABLE2. Before switching
the view, we build the new partitions and data in the currently unused
table; once that table is built, the view is redefined to point at the
newly loaded table, and the apps continue to query the view.
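For anyone following along, a minimal sketch of that switch pattern --
the view name APP_VIEW, partition name, and staging table are hypothetical;
only TABLE1/TABLE2 come from the thread:

```sql
-- Sketch only: load the currently unused table (say TABLE2)
-- while the apps read TABLE1 through the view.
ALTER TABLE table2
  EXCHANGE PARTITION p_current
  WITH TABLE staging_data;   -- staging_data is a hypothetical pre-loaded table

-- Then atomically repoint the view at the freshly built table.
CREATE OR REPLACE VIEW app_view AS
  SELECT * FROM table2;
```

CREATE OR REPLACE VIEW is DDL, so it still invalidates dependent cursors
even though the switch itself is near-instant.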
I confirmed that the table undergoing the DDL had no open SQL against it,
which would seem to rule out invalidating a lot of cursors.
It definitely seems to have been an issue with one (or more) specific
subpools. Querying the subpools while the jobs were not running shows 7
pools, each with large amounts of free space.
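For reference, one way to get per-subpool free space without walking
x$ksmsp is to group x$ksmss (the base view behind v$sgastat) by subpool --
column names as I recall them, so verify on your version:

```sql
-- Free memory per shared pool subpool (run as SYS; X$ views are SYS-only).
-- x$ksmss is a per-area summary, far cheaper to scan than x$ksmsp.
SELECT ksmdsidx            AS subpool,
       ROUND(SUM(ksmsslen) / 1024 / 1024) AS free_mb
FROM   x$ksmss
WHERE  ksmssnam = 'free memory'
GROUP  BY ksmdsidx
ORDER  BY ksmdsidx;
```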
Chris
On Fri, Jun 1, 2018 at 7:34 AM, Rich J <rjoralist3@xxxxxxxxxxxxxxxxxxxxx>
wrote:
On 2018/05/31 11:29, Chris Taylor wrote:
Background:
We've got a database with 4TB of RAM and a 260GB shared pool. Everything
was running normally until 2 nights ago when a regularly scheduled job that
does TONS of DDL suddenly blew up the shared pool.
Trying to query x$ksmsp for shared pool sizes is also causing huge amounts
of latch waits.
See Tanel Poder's blog about querying x$ksmsp on a live instance --
definitely not recommended.
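A safer alternative on a live instance is the documented v$sgastat, which
reports per-component sizes without chunk-by-chunk latching (FETCH FIRST
is 12c+ syntax; use ROWNUM on older versions):

```sql
-- Top shared pool consumers by component, no x$ksmsp needed.
SELECT pool, name, ROUND(bytes / 1024 / 1024) AS mb
FROM   v$sgastat
WHERE  pool = 'shared pool'
ORDER  BY bytes DESC
FETCH FIRST 10 ROWS ONLY;
```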
Also, could you be experiencing large numbers of invalidations from the
DDL, causing your contention? I had severe issues with an 11GB shared
pool (AMM -- turned that noise off a long time ago) hanging the instance
for 10 minutes while stats collection invalidated 250K bindless SQL
statements in the pool.
Just a thought...
Rich