To those who replied to my earlier post and are interested: we resolved the issue with either of two solutions:

1. Upgrade to Solaris 10. The LF behavior reverts to that of earlier versions -- this makes the individual latch gets as fast as in 9.2.0.5.

2. Alternatively, set _large_pool_min_alloc = 64k. This does the magic even with the database still running on Solaris 8 -- it greatly reduced the number of latch allocations, so the issue is gone.

On 8/6/07, Zhu,Chao <zhuchao@xxxxxxxxx> wrote:
>
> Maybe not many people use MTS these days due to nearly-free RAM. But when
> the connection count goes really high, you still have to, and we are such a user.
> We experienced huge "latch: shared pool" contention in a 10.2 MTS
> environment, while on 9i it was fine. We *guess* it mainly comes from
> sort/hash operations, which allocate and deallocate memory from the large
> pool, and the large pool is managed by the shared pool latch.
> From Statspack, the top functions that took the shared pool latch were
> kghalo and kghfre.
> Also, we are running with sort_area_size, not workarea_size_policy = auto
> (though 10g starts to support that).
> Does anyone have experience with MTS/shared pool latch contention?
>
>
> --
> Regards
> Zhu Chao
> www.cnoug.org

--
Regards
Zhu Chao
www.cnoug.org
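
P.S. For anyone wanting to try option 2, a rough sketch of how the parameter change might be applied. This is an assumption-laden illustration only: _large_pool_min_alloc is an undocumented underscore parameter, so the exact value, scope, and restart requirement should be confirmed with Oracle Support before touching a production system.

```sql
-- Sketch only: change undocumented parameters under Oracle Support guidance.
-- Quotes are required because the parameter name begins with an underscore.
-- Assumes the instance uses an spfile; 65536 bytes = 64k.
ALTER SYSTEM SET "_large_pool_min_alloc" = 65536 SCOPE = SPFILE;
-- Then bounce the instance so the new minimum allocation size takes effect.
```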