In such a case, I would take a systemstate dump and run it through ass109.awk to identify the blocker. Then you can SQL trace the blocking session to figure out what it is doing and why it keeps holding the row cache latch; that should point you to the root cause.

To take a systemstate dump:

  oradebug setmypid
  oradebug dump systemstate 266
  oradebug tracefile_name

Google ass109.awk for the script, then run:

  awk -f ass109.awk <tracefile>

On Tue, Oct 11, 2011 at 10:47 PM, GG <grzegorzof@xxxxxxxxxx> wrote:
> Hi,
> on my 4-node 10.2.0.3 RAC database I get hangs from time to time. When I
> check the waiting sessions I see info like this (it is from Tanel's sw
> output):
>
>  356 WAITING  latch: row cache objects  62929   0  address= number= 199 tries= 0
>       0x256B21280: row cache objects[c16]  0x0000000256B21280
>  337 WAITING  latch: row cache objects  38039   0  address= number= 199 tries= 0
>       0x256B21280: row cache objects[c16]  0x0000000256B21280
>  312 WAITING  latch: row cache objects   7654   0  address= number= 199 tries= 0
>       0x256B21280: row cache objects[c16]  0x0000000256B21280
>  418 WAITING  latch: row cache objects  11547   0  address= number= 199 tries= 0
>       0x256B21280: row cache objects[c16]  0x0000000256B21280
>  709 WAITING  latch: row cache objects  20731   0  address= number= 199 tries= 0
>       0x256B21280: row cache objects[c16]  0x0000000256B21280
>  660 WAITING  latch: row cache objects  17282   0  address= number= 199 tries= 0
>       0x256B21280: row cache objects[c16]  0x0000000256B21280
>  652 WAITING  latch: row cache objects  43618   0  address= number= 199 tries= 0
>       0x256B21280: row cache objects[c16]  0x0000000256B21280
>  584 WAITING  latch: row cache objects   7606   0  address= number= 199 tries= 0
>       0x256B21280: row cache objects[c16]  0x0000000256B21280
>  487 WAITING  latch: row cache objects   3039   0  address= number= 199 tries= 0
>       0x256B21280: row cache objects[c16]  0x0000000256B21280
>  478 WAITING  latch: row cache objects   3435   0  address= number= 199 tries= 0
>       0x256B21280: row cache objects[c16]  0x0000000256B21280
>  752 WAITING  latch: row cache objects  39295   0  address= number= 199 tries= 0
>       0x256B21280: row cache objects[c16]  0x0000000256B21280
>  465 WAITING  latch: row cache objects    757   0  address= number= 199 tries= 0
>       0x256B21280: row cache objects[c16]  0x0000000256B21280
>  460 WAITING  latch: row cache objects   3596   0  address= number= 199 tries= 0
>       0x256B21280: row cache objects[c16]  0x0000000256B21280
>  413 WAITING  latch: row cache objects  38836   0  address= number= 199 tries= 3
>       0x256B21280: row cache objects[c16]  0x0000000256B21280
>  743 WAITING  library cache lock        12366 546  handle address= lock address= 100*mode+namespace
>       0x00000001BFEB6788 0x00000001FCA6FF68 = 301
>  346 WAITING  library cache pin           477 195  handle address= pin address= 100*mode+namespace
>       0x0000000110A0FBC8
>
> So the address seems to be the same, 0x256B21280. How can I go deeper and
> find out who is blocking/overusing row cache id 16?
>
>  16 PARENT        dc_histogram_defs
>  16 SUBORDINATE 0 dc_histogram_data
>  16 SUBORDINATE 1 dc_histogram_data
>
> I thought maybe latchprof with the addr could help, but I am not sure.
> Generally, how do I find the root cause for this?
> Regards
> GregG
>
> --
> //www.freelists.org/webpage/oracle-l

--
Regards
Sidney Chen
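
P.S. A complementary live check (a sketch, not a substitute for the systemstate dump): V$LATCHHOLDER shows who holds a latch at the instant you query it, so on a busy latch a few repeated runs will often catch the holder. The RAW literal below is the child latch address from GregG's sw output; latch addresses are per-instance, so run this on the node that is hanging (or drop the LADDR filter and eyeball the names):

```sql
-- Hedged sketch: catch the current holder of the hot row cache child latch.
-- The address literal comes from the sw output in this thread; on a RAC
-- node other than the one sampled, the child latch address will differ.
SELECT lh.inst_id, lh.sid, lh.name, s.sql_id, s.program, s.event
  FROM gv$latchholder lh
  JOIN gv$session s
    ON s.inst_id = lh.inst_id
   AND s.sid     = lh.sid
 WHERE lh.laddr = HEXTORAW('0000000256B21280');
```

If a holder keeps showing up, its SID/SQL_ID is the candidate for the SQL trace suggested above.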