Re: How to troubleshoot heavy RAM consumption

  • From: Tim Gorman <tim@xxxxxxxxx>
  • To: nkodner@xxxxxxxxx
  • Date: Tue, 23 Mar 2010 12:07:46 -0600


Expensive sorts, hash joins, etc. are not always coded into the application, so you don't necessarily have to go back to the developers or power users.

Instead, first take a look at the most expensive SQL and see whether there is a way to improve the execution plan without changing the SQL.  Sometimes the way to convert a huge SORT-MERGE join or HASH join fed by FULL table scans into an unobtrusive NESTED LOOPS join is simply to add an appropriate index or two.  It's not always that easy, but it happens often enough to be worth looking into.  When the solution is more complex than that, it usually means getting elbow-deep in the logic of the SQL statement, which can be laborious but is usually well worth the effort.
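To make the idea concrete, here is a sketch; the table, column, and index names are hypothetical, not from anything in this thread:

```sql
-- Hypothetical join that full-scans both tables because ORDERS has
-- no index on its join column, driving a hash join of two full scans.
SELECT c.name, o.order_date, o.total
  FROM customers c
  JOIN orders    o ON o.customer_id = c.customer_id
 WHERE c.region = 'WEST';

-- Adding an index on the join column may let the optimizer use a
-- NESTED LOOPS join driven from CUSTOMERS into ORDERS instead.
-- Verify with EXPLAIN PLAN before and after.
CREATE INDEX orders_cust_idx ON orders (customer_id);
```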

Everyone's methods vary, but instead of using an automatic formatting program like TOAD, consider simply re-formatting and "pretty-fying" the SQL statement by hand in an editor (I like "vi").  By the time you finish re-arranging the SQL text into the format you like, you'll be intimately familiar with the logic of the statement and able to start making an intelligent, focused investigation for solutions.  I've had folks who watch me do this editing accuse me of being anal-retentive and behind the times (and I have no argument with either), but when I finish editing (big SQL statements sometimes take an hour or more) and start discussing what the SQL statement is doing versus what the developer probably intended, the jokesters usually shut up and start listening.  And at the end of the process, it is easier to begin testing different solutions while the logic of the SQL is fresh and familiar.

So, if you can identify the SQL that the heavy users of PGA are running, see if there is anything you can do to improve it.
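As a starting point for that identification, something along these lines (a sketch against the 10g v$ views; verify column names on your release) ties the heaviest PGA consumers to the SQL they are currently running:

```sql
-- Sketch: sessions ordered by PGA in use, with their current SQL.
-- v$process carries the PGA figures; v$session links to it via paddr.
SELECT s.sid, s.username,
       ROUND(p.pga_used_mem/1024/1024) AS pga_used_mb,
       q.sql_id, q.sql_text
  FROM v$process p
  JOIN v$session s ON s.paddr = p.addr
  LEFT JOIN v$sql q ON q.sql_id = s.sql_id
                   AND q.child_number = s.sql_child_number
 ORDER BY p.pga_used_mem DESC;
```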

Hope this helps...
Tim Gorman
consultant -> Evergreen Database Technologies, Inc.
postal     => P.O. Box 630791, Highlands Ranch CO  80163-0791
website    =>
email      => Tim@xxxxxxxxx
mobile     => +1-303-885-4526
fax        => +1-303-484-3608
Lost Data? => for info about DUDE...

Neil Kodner wrote:
I've had no problem identifying the heavy PGA users, but how do I get them to either use less RAM or free the RAM back to the OS any faster?  That's the part I'm stuck on.  Given that our current system has been running for a few years, I can't imagine going back and telling the devs to get rid of the sorts, hash joins, etc.

On Wed, Mar 10, 2010 at 5:51 PM, Jared Still <jkstill@xxxxxxxxx> wrote:

You may want to try running the attached script pga_advice 
to see how much PGA Oracle thinks you need to meet current usage.

The script pga_workarea_active.sql will show how the memory
is being used.  It doesn't show which sessions; I don't think I
have a query for that, but someone else posted the tables needed
if you want to figure that out.
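For reference, v$sql_workarea_active does carry a SID column in 10g, so a join along these lines (a sketch, not the script attached above) can tie active work areas back to sessions:

```sql
-- Sketch: active sort/hash work areas per session, largest first.
SELECT s.sid, s.username, w.operation_type,
       ROUND(w.actual_mem_used/1024/1024) AS mem_mb,
       ROUND(w.tempseg_size/1024/1024)    AS temp_mb,
       w.sql_id
  FROM v$sql_workarea_active w
  JOIN v$session s ON s.sid = w.sid
 ORDER BY w.actual_mem_used DESC;
```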

Jared Still
Certifiable Oracle DBA and Part Time Perl Evangelist
Oracle Blog:
Home Page:

On Wed, Mar 10, 2010 at 11:42 AM, Neil Kodner <nkodner@xxxxxxxxx> wrote:
Great advice.  I'll look at sessions by PGA usage in v$process.  I didn't realize that the pga_aggregate_target was that soft of a limit.   Is there a better source for determining exactly what, at the session level, is eating up the PGA memory?

What are some factors that can contribute to sessions using, and hanging on to pga?  

As far as parallel goes, we're not doing a whole lot during business hours.  

On Wed, Mar 10, 2010 at 10:51 AM, Paul Drake <bdbafh@xxxxxxxxx> wrote:

It looks like you have some memory free in the large and java pools.
Is this running with automatic memory management?
Still, that free memory is nowhere near enough to account for a PGA that should be 6 GB growing to 14 GB.

It appears that the max pga_aggregate used was around 14 GB.
You appear to be on the right track.
It's not that the pga_aggregate_target setting is too small; it's that
it isn't being respected.
You're probably going to want to drill down into pga mem usage.
Perhaps you have a few sessions that would be better off with a manual
workarea policy.
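For such a session, a manual policy might look something like this (the sizes here are purely illustrative, not a recommendation):

```sql
-- Sketch: cap one session's sort and hash memory explicitly instead
-- of letting it negotiate under pga_aggregate_target.
ALTER SESSION SET workarea_size_policy = manual;
ALTER SESSION SET sort_area_size = 67108864;   -- 64 MB
ALTER SESSION SET hash_area_size = 134217728;  -- 128 MB
```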
Is there a large amount of parallel work going on?

If you hike the value for pga_aggregate_target, it might make the
problem worse as Oracle may extrapolate into using even larger amounts
of pga memory (than the peak of 14 GB).


On Wed, Mar 10, 2010 at 12:14 PM, Neil Kodner <nkodner@xxxxxxxxx> wrote:
> Our 32GB Solaris 9 server is running two instances of 10gR2:
> one small db to support OID (sga_max_size 288M, pga_aggregate_target 32M) and one
> of our production dbs (6GB pga_aggregate_target, 12GB sga_max_size).
> According to top, before the workday starts, we are sitting at 10-12GB
> available physical memory.  Once our users get working, that number drops to
> 500-600MB available.  Top reports 29G swap in use, 11G swap free.  OEM says
> that we're performing heavy swapping, and I'm not sure how to resolve it.
> We typically have 1500-1700 connections.  My gut tells me that the PGA might be
> too small and the SGA too large, but I'm not really able to tell.
> My v$pgastat and v$sgastat can be viewed at

