9.2.0.5 Linux

In our statspack reports over a 15 minute period, we are getting waits on enqueue (about 880 seconds over 15 minutes with quite a lot of sessions - about 55% of the time). The wait events section shows:

                                                                   Avg
                                                     Total Wait   wait    Waits
Event                               Waits   Timeouts   Time (s)   (ms)     /txn
---------------------------- ------------ ---------- ---------- ------ --------
enqueue                             3,132      1,868        881    281     44.2

and the enqueue section has:

-> Enqueue stats gathered prior to 9i should not be compared with 9i data
-> ordered by Wait Time desc, Waits desc

                                                        Avg Wt         Wait
Eq     Requests    Succ Gets  Failed Gets       Waits   Time (ms)     Time (s)
-- ------------ ------------ ----------- ----------- ------------- ------------
CF        1,985        1,985           0         328          5.17            2
TX       49,187       49,187           0          16         20.88            0
XR          293          293           0         293           .94            0
HW           41           41           0          41           .71            0
FB           47           47           0          42           .62            0
PS          122          122           0          26           .85            0
JQ           12            6           6           6          1.00            0

Looking at this:

select *
  from v$enqueue_stat
 where total_wait# > 0
 order by cum_wait_time

the highest enqueues are:

   INST_ID EQ TOTAL_REQ# TOTAL_WAIT#  SUCC_REQ# FAILED_REQ# CUM_WAIT_TIME
---------- -- ---------- ----------- ---------- ----------- -------------
         1 PS     271770      171064     268304        3466        157939
         1 CF    4565455      783578    4564846         608       1793046
         1 TX   10172061      220417   10172014          47      59668034

Now, TX is obviously the highest by a long way. We were doing a lot of heavy inserts/updates into a number of tables, so I was thinking INITRANS was set incorrectly, giving us ITL waits.

So I run:

select distinct b.owner, b.object_name, b.object_type, a.value
  from v$segstat a, v$segment_statistics b
 where a.obj# = b.obj#
   and b.object_name in (select object_name from dba_objects)
   and a.statistic_name = 'ITL waits'
   and a.value > 0
 order by a.value

and I am seeing between 1 and 70 ITL waits on the indexes concerned.

What I don't know is: is that a lot of waits to warrant changing INITRANS?

Thanks
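PS - I realise v$segment_statistics already carries the statistic name and value, so presumably the same check can be written without the join back to v$segstat - something like:

select owner, object_name, object_type, value
  from v$segment_statistics
 where statistic_name = 'ITL waits'
   and value > 0
 order by value;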
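I'm also assuming the current settings on those indexes can be confirmed from dba_indexes - along these lines (the schema name and index pattern below are placeholders, not our real names):

-- APP_OWNER and the IDX% pattern are placeholders
-- default INI_TRANS for an index is 2
select owner, index_name, ini_trans, max_trans
  from dba_indexes
 where owner = 'APP_OWNER'
   and index_name like 'IDX%';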
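And if the waits do justify a change, my understanding is that a plain ALTER INDEX ... INITRANS only affects blocks formatted after the change, so existing blocks would need a rebuild - roughly (index name and value here are hypothetical):

-- hypothetical index name and INITRANS value; the REBUILD re-creates
-- existing blocks with the new setting, whereas ALTER ... INITRANS
-- alone would only apply to newly formatted blocks
alter index app_owner.idx_orders_01 rebuild initrans 10;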