Re: MMON_SLAVE seems high CPU with "select longname from javasnm$ where short = :1"

  • From: "Radoulov, Dimitre" <cichomitiko@xxxxxxxxx>
  • To: kibeha@xxxxxxxxx, oracle-l <oracle-l@xxxxxxxxxxxxx>
  • Date: Tue, 1 Dec 2015 10:17:20 +0100

Hi,
some hints:

High Database Traffic Seen In Table javasnm$ (Doc ID 2060224.1)
Bug 14755810 : LIBRARY CACHE PIN ON SELECT LONGNAME FROM JAVASNM$ WHERE SHORT = :1 (92 - Closed, Not a Bug)
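
If the JIT slave traffic itself turns out to be the problem, a workaround sometimes mentioned in this context (please verify against the note before applying, and note it makes stored Java run interpreted) is disabling the Java JIT compiler, roughly:

alter system set java_jit_enabled = false;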


Regards
Dimitre

On 01/12/2015 09:53, Kim Berg Hansen wrote:

Hi, List

We have an Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production database on Oracle Linux Server release 6.5 with 4 cores.

One statement seems to practically continuously use 20% of a core when I look at top activity in Cloud Control (or Enterprise Manager or whatever it is called):

select longname
from javasnm$
where short = :1

All of it executed by a single session with Module=MMON_SLAVE, Action=JAVAVM JIT slave action.
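
I spotted the session in the Cloud Control top activity page, but something like this should find it directly (just a sketch):

select sid, serial#, program, event
from   v$session
where  module = 'MMON_SLAVE'
and    action = 'JAVAVM JIT slave action';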

We do use some Java and XML in the database, but I do not believe it to be huge amounts.
Besides, I would think it would then be executed in user processes rather than by this single background process?

I have tried to enable trace for the session in order to examine bind variable content to find out which Java identifiers are being looked up.
But no trace file is created in the usual place? (The session also has AudSid=0, and AudSid is usually part of the trace file name, as far as I know.)
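
For reference, I enabled the trace with something along these lines (a sketch; sid and serial# taken from v$session):

begin
   dbms_monitor.session_trace_enable(
      session_id => :sid,
      serial_num => :serial,
      waits      => true,
      binds      => true);
end;
/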

Q1: Are trace files from background processes placed in another directory? If so, how can I find which directory?
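
For what it is worth, I would have expected to locate the file with a sketch like this, using the tracefile column that 11g exposes:

select p.tracefile
from   v$session s
join   v$process p on p.addr = s.paddr
where  s.sid = :sid;

But perhaps background processes behave differently here.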


The wait event for the session seems to be JOX Jit Process Sleep.
As far as I can tell it is an idle event.
MOS note 1075283.1 describes a bug in 11.1, supposedly fixed in 11.2, where this event could wrongly show up in the top list.
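
The classification can be checked like this (a sketch):

select name, wait_class
from   v$event_name
where  name = 'JOX Jit Process Sleep';

which is what makes me believe it is in the Idle wait class.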

Q2: Is it possible that the 20% of a core shown in Cloud Control is not actual CPU expenditure but is wrongly reported?
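
One cross-check I am considering (a sketch; the statistic is reported in centiseconds as far as I know):

select sn.name, st.value / 100 as cpu_seconds
from   v$sesstat st
join   v$statname sn on sn.statistic# = st.statistic#
where  st.sid = :sid
and    sn.name = 'CPU used by this session';

If that counter barely moves while Cloud Control keeps showing 20% of a core, I would suspect the reporting.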


Q3: Any suggestions for how to dig into what actually is happening with that statement?
(Keeping in mind I am primarily a developer ;-)
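
So far my own digging has been limited to something like this sketch against v$sql (the literal text may be stored differently, so the filter is an assumption on my part):

select sql_id, executions, cpu_time, elapsed_time, buffer_gets
from   v$sql
where  lower(sql_text) like 'select longname from javasnm$%';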


Thanks in advance for any hints I might use for further digging.


Regards


Kim Berg Hansen

http://www.kibeha.dk
kibeha@xxxxxxxxx
@kibeha <http://twitter.com/kibeha>

