Re: database using more memory than allocated

  • From: Tom Dale <tom.dale@xxxxxxxxxxxx>
  • To: backseatdba@xxxxxxxxx
  • Date: Mon, 5 Oct 2015 14:25:09 +0100

Hi Jeff,

I think you will more likely find it's a bug. If you compare Oracle counters
(e.g. v$pgastat) with OS views (e.g. top and pmap -x), they may not match up.
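
If it helps, this is roughly how I would do that comparison - just a sketch
against the standard 11.2 views, so adjust the statistic names to taste:

-- Oracle's own view of PGA usage, instance-wide
select name, round(value/1024/1024) mb
from   v$pgastat
where  name in ('total PGA allocated', 'total PGA inuse',
                'maximum PGA allocated');

-- Per-process PGA as Oracle accounts for it; compare each SPID
-- against top / pmap -x RSS on Unix (or the process list on Windows)
select p.spid, s.sid, s.program,
       round(p.pga_alloc_mem/1024/1024) alloc_mb,
       round(p.pga_max_mem /1024/1024)  max_mb
from   v$process p
left join v$session s on s.paddr = p.addr
order by p.pga_alloc_mem desc;

If the total of pga_alloc_mem is far below what the OS reports for the same
processes, the extra memory is being allocated outside Oracle's own
accounting, which is where the bug suspicion comes in.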

I had something similar before: a REGEXP_REPLACE and REGEXP_SUBSTR that
would leak memory until the host hung.

https://community.oracle.com/thread/2465164?start=0&tstart=0

Tom





On Thu, Oct 1, 2015 at 5:29 PM, Jeff Chirco <backseatdba@xxxxxxxxx> wrote:

Thank you. The interesting thing is that this has been working just fine with
300 locations for a long time, and we haven't made any changes recently. OK,
actually, two weeks ago I changed memory_target from 2GB to 4GB.

On Thu, Oct 1, 2015 at 1:47 AM, Stefan Koehler <contact@xxxxxxxx> wrote:

Hi Jeff,

Why would the database use more memory than it has been set to use?

Because memory_target does not control everything - for example, structures
like PL/SQL collections and PL/SQL tables. Oracle 12c has official
functionality to limit these as well, but you are still on 11g. Oracle 11g
only has an undocumented event to do this (10261 -
http://www.oracle.com/technetwork/database/features/availability/exadata-consolidation-522500.pdf).
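
(The 12c feature mentioned here is presumably pga_aggregate_limit. For the
11g event, the syntax usually quoted in the paper above is along these lines -
treat it as a sketch and test it first, since the event is undocumented and,
as I understand it, the level is the per-process PGA limit in KB:)

-- Cap per-process PGA at roughly 1 GB for the current session
-- (level is in KB; undocumented event, verify on a test system first)
alter session set events '10261 trace name context forever, level 1048576';

-- Or instance-wide via the EVENT init parameter (takes effect after restart)
alter system set event='10261 trace name context forever, level 1048576'
  scope=spfile;

The idea being that the offending call hits an error instead of the whole
host running out of memory.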

Tanel Poder has written a great blog post about drilling down into PGA
memory issues:

http://blog.tanelpoder.com/2014/03/26/oracle-memory-troubleshooting-part-4-drilling-down-into-pga-memory-usage-with-vprocess_memory_detail/
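
(If I remember that post correctly, v$process_memory_detail has to be
populated with an oradebug dump first. As a first pass, the plain
v$process_memory view already breaks PGA down by category per process -
a rough sketch:)

-- PGA by category (SQL, PL/SQL, Freeable, Other) for each process,
-- biggest allocations first
select p.spid, pm.category,
       round(pm.allocated/1024/1024)     alloc_mb,
       round(pm.used/1024/1024)          used_mb,
       round(pm.max_allocated/1024/1024) max_mb
from   v$process_memory pm
join   v$process p on p.pid = pm.pid
order by pm.allocated desc;

A large 'PL/SQL' or 'Other' category would line up with the
collections/tables explanation above.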

You can cross-check it when you have reached the 250 locations. The problem
description sounds like some of these locations are causing this - a memory
leak, or perhaps a software/data bug?

Best Regards
Stefan Koehler

Freelance Oracle performance consultant and researcher
Homepage: http://www.soocs.de
Twitter: @OracleSK

Jeff Chirco <backseatdba@xxxxxxxxx> wrote on 1 October 2015 at 03:05:

We are having some major memory issues with one of our databases today. I
have the following parameters set:

memory_max_target = 8g
memory_target = 4g

11.2.0.4 EE database on Windows Server 2008R2
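
(With memory_target at 4g, one quick way to see how the instance itself is
carving that target up - and whether Oracle's own accounting stays inside
it - is the standard AMM view; a sketch only:)

-- How the memory_target is currently split between SGA and PGA components
select component,
       round(current_size/1024/1024) current_mb,
       round(max_size/1024/1024)     max_mb
from   v$memory_dynamic_components
where  current_size > 0
order by current_size desc;

Anything beyond those components lives outside the target, which is the gap
between the 4GB setting and the 84GB the Windows service grew to.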

So the database is only supposed to use 4GB, but it got up to 84GB before it
took down the server. We think we found the application that caused it, but
not the database-side reason. We have 300 external sites that push us data
every minute. We didn't realize for half the day that it wasn't working.
Around noon it apparently started working again, and I was able to see that
the Windows service for that database had been allocated 84GB before it
killed the entire server.
Once we discovered that this service had started working, we decided to allow
the locations to upload one at a time, then 5, 10, 25 - everything was fine
until we got to 250 locations. Then all of a sudden the memory jumped up
really fast to 40GB before I manually stopped the database and we stopped all
external loads.
Why would the database use more memory than it has been set to use? I have
been trying to go through ASH and AWR, but it is hard to get a good picture
because we keep having to restart the database. Tomorrow morning we are going
to work on it again.
I am really lost on this, so if anybody can offer any suggestions I would
appreciate it.

I was going to create an SR, but apparently Oracle is having unplanned
downtime for their site. Maybe this is a bigger issue than I thought :)




