Hi Jeff,
I think you will more likely find it's a bug. If you compare Oracle's own counters
(e.g. v$pgastat) with the OS view (e.g. top and pmap -x), they may not match up.
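A quick way to make that comparison (a sketch using the standard v$pgastat and v$process views; adapt as needed for your environment):

```sql
-- Oracle's own accounting of PGA memory (v$pgastat values are in bytes)
SELECT name, ROUND(value/1024/1024) AS mb
FROM   v$pgastat
WHERE  name IN ('total PGA allocated',
                'total PGA inuse',
                'maximum PGA allocated');

-- Map each session to its OS process id, then compare with top / pmap -x:
SELECT s.sid, s.username, p.spid
FROM   v$session s
JOIN   v$process p ON p.addr = s.paddr;
-- On the OS: pmap -x <spid>  (Unix/Linux; on Windows use Process Explorer)
```

If the OS shows the process holding far more memory than v$pgastat accounts for, that points at an untracked allocation (i.e. a leak/bug) rather than normal PGA growth.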
I had something similar before: a REGEXP_REPLACE and REGEXP_SUBSTR that
would leak memory until the host hung.
https://community.oracle.com/thread/2465164?start=0&tstart=0
Tom
On Thu, Oct 1, 2015 at 5:29 PM, Jeff Chirco <backseatdba@xxxxxxxxx> wrote:
Thank you. The interesting thing is that this has been working with 300
locations just fine for a long time, and we haven't made any changes
recently. OK, actually, two weeks ago I changed memory_target from 2gb to
4gb.
On Thu, Oct 1, 2015 at 1:47 AM, Stefan Koehler <contact@xxxxxxxx> wrote:
Hi Jeff,
Why would the database use more memory than it has been set to use?
Because memory_target does not control everything - for example,
structures like PL/SQL collections and PL/SQL tables. Oracle 12c has some
official functionality to limit these as well, but you are still on 11g.
Oracle 11g only has an undocumented event for this (10261 -
http://www.oracle.com/technetwork/database/features/availability/exadata-consolidation-522500.pdf
).
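For reference, a sketch of how that event is typically set. This is an assumption on my part based on the linked paper: the event is undocumented, and the level is, as far as I know, a per-process PGA cap in KB - verify and test outside production first:

```sql
-- UNDOCUMENTED event 10261: cap per-process PGA allocation.
-- Level is believed to be in KB (here roughly 500 MB per process).
ALTER SYSTEM SET EVENTS '10261 trace name context forever, level 512000';

-- Or for a single session only:
ALTER SESSION SET EVENTS '10261 trace name context forever, level 512000';
```

A process exceeding the cap should fail its allocation with an ORA- error instead of growing until the host runs out of memory.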
Tanel Poder has written a great blog post about drilling down into PGA
memory issues:
http://blog.tanelpoder.com/2014/03/26/oracle-memory-troubleshooting-part-4-drilling-down-into-pga-memory-usage-with-vprocess_memory_detail/
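As a starting point, something like this sketch shows which memory category is growing per process (v$process_memory is a standard view; the oradebug steps for the detail view follow the blog post above):

```sql
-- Which processes hold the most PGA, and in which category
-- (SQL, PL/SQL, Freeable, Other)?
SELECT pid, category,
       ROUND(allocated/1024/1024)     AS alloc_mb,
       ROUND(used/1024/1024)          AS used_mb,
       ROUND(max_allocated/1024/1024) AS max_mb
FROM   v$process_memory
ORDER  BY allocated DESC;

-- For a heap-level breakdown of one suspect process, populate
-- v$process_memory_detail first (as described in the blog post):
--   SQL> ORADEBUG SETMYPID
--   SQL> ORADEBUG DUMP PGA_DETAIL_GET <pid>
-- then query v$process_memory_detail for that pid.
```

A large and growing "Other" or "PL/SQL" category is often the tell-tale for the kind of untracked allocations memory_target cannot limit.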
You can crosscheck it when you have reached the 250 locations. The
problem description sounds like some of these locations are triggering
this - a memory leak, or a software/data bug?
Best Regards
Stefan Koehler
Freelance Oracle performance consultant and researcher
Homepage: http://www.soocs.de
Twitter: @OracleSK
Jeff Chirco <backseatdba@xxxxxxxxx> wrote on 1 October 2015 at 03:05:
We are having some major memory issues with one of our databases today.
I have the following parameters set:
memory_max_target = 8g
memory_target = 4g
11.2.0.4 EE database on Windows Server 2008R2
So the database is only supposed to use 4gb, but it got up to 84gb
before it took down the server. We think we found the application that
caused it, but not the database reason. We have 300 external sites that
push us data every minute. We didn't realize for half the day that it
wasn't working. Around noon it apparently started working, and I was able
to see that the Windows service for that database was allocated 84gb
before it killed the entire server.
Once we discovered that this service had started working, we decided to
allow each location to upload one at a time, then 5, 10, 25; everything
was fine until we got to 250 locations. Then all of a sudden the memory
jumped up really fast to 40gb before I manually stopped the database and
we stopped all external loads.
Why would the database use more memory than it has been set to use? I
have been trying to go through ASH and AWR, but it is hard to get a good
picture because we keep having to restart the database. Tomorrow morning
we are going to work on it again.
I am really lost on this, so if anybody can offer any suggestions I
would appreciate it.
I was going to create an SR, but apparently Oracle is having unplanned
downtime for their site. Maybe this is a bigger issue than I thought :)