Re: row cache lock contention parallel insert

  • From: LS Cheng <exriscer@xxxxxxxxx>
  • To: Greg Rahn <greg@xxxxxxxxxxxxxxxxxx>
  • Date: Sun, 20 Dec 2009 18:41:15 +0100

AWR shows row cache lock as the second top wait event.

I did further testing, and it seems that the APPEND hint is what generates the
tons of row cache queries: dc_users, dc_objects and dc_object_ids are queried
thousands of times for a simple insert of a single row. In order to reduce the
row cache queries I specified the partition name in the insert statement; this
cut the dc_* gets from 176,502, 176,419 and 264,684 down to 468, 401 and 1,084.
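(snap_rowcache is a snapshot package around v$rowcache; for anyone without it,
a rough by-hand equivalent follows. This is only a sketch: it assumes SELECT
access to v$rowcache and uses select * as a stand-in for the elided column
list. Note that v$rowcache is instance-wide rather than per-session, so the
numbers are only clean on an otherwise idle test instance, and it can hold
several rows per parameter, hence the group by.)

-- snapshot the row cache gets before the test
create table rc_snap as
select parameter, sum(gets) gets
from   v$rowcache
group  by parameter;

-- run the statement under test
insert /*+ append */ into t1 select * from t2;
commit;

-- diff the current gets against the snapshot
select c.parameter,
       sum(c.gets) - max(s.gets) delta_gets
from   v$rowcache c, rc_snap s
where  c.parameter = s.parameter
group  by c.parameter
having sum(c.gets) - max(s.gets) > 0
order  by 2 desc;

drop table rc_snap;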

So APPEND + partition clause gives good performance.

If I don't use the APPEND hint the performance is fine, both with and without
specifying the partition.


So I have a doubt: why does APPEND need to make so many recursive queries to
the row cache (even without parallel)? :-?
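One way to see exactly what those recursive queries are would be to repeat the
test under a 10046 trace and keep the recursive SQL in the tkprof report. A
sketch only (the tracefile identifier and output name are placeholders, and
select * again stands in for the real column list):

alter session set tracefile_identifier = 'append_rowcache';
alter session set events '10046 trace name context forever, level 12';

insert /*+ append */ into t1 select * from t2;
commit;

alter session set events '10046 trace name context off';

-- then: tkprof <trace file> append_rowcache.prf sys=yes sort=exeela
-- (sys=yes keeps the recursive dictionary statements in the report)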

Thanks!

--
LSC


*With APPEND only*
set serverout on size 10000
alter session disable parallel query;
alter session disable parallel dml;
exec snap_rowcache.start_snap
insert /*+ append */ into t1 select ................ t2
exec snap_rowcache.end_snap
Parameter                    Usage   Fixed    Gets  Misses   Scans  Misses    Comp    Mods Flushes
---------                    -----   -----    ----  ------   -----  ------    ----    ---- -------
dc_segments                      0       0     100       0       0       0       0       3       3
dc_tablespaces                   0       0      63       0       0       0       0       0       0
dc_tablespace_quotas             0       0       3       3       0       0       0       3       3
dc_users                         0       0 176,502       0       0       0       0       0       0
dc_objects                       0       0 176,419       0       0       0       0       0       0
dc_global_oids                   0       0      28       0       0       0       0       0       0
dc_object_ids                    0       0 264,684       0       0       0       0       0       0
dc_usernames                     0       0      10       0       0       0       0       0       0
dc_database_links                0       0       4       0       0       0       0       0       0
dc_profiles                      0       0       3       0       0       0       0       0       0
dc_users                         0       0      27       0       0       0       0       0       0

*With APPEND + PARTITION*
set serverout on size 10000
alter session disable parallel query;
alter session disable parallel dml;
exec snap_rowcache.start_snap
insert /*+ append */ into t1 partition(p1) select ................
exec snap_rowcache.end_snap
Parameter                    Usage   Fixed    Gets  Misses   Scans  Misses    Comp    Mods Flushes
---------                    -----   -----    ----  ------   -----  ------    ----    ---- -------
dc_segments                      0       0       2       0       0       0       0       0       0
dc_tablespaces                   0       0       7       0       0       0       0       0       0
dc_users                         0       0     468       0       0       0       0       0       0
dc_objects                       1       0     401       1       0       0       0       0       0
dc_object_ids                    0       0   1,084       0       0       0       0       0       0
dc_histogram_defs               73       0     292      73       0       0       0       0       0
dc_histogram_data                0       0      52       0       0       0       0       0       0
dc_users                         0       0       3       0       0       0       0       0       0


*Without APPEND and Partition*
set serverout on size 10000
alter session disable parallel query;
alter session disable parallel dml;
exec snap_rowcache.start_snap
insert into t1 select ................ t2
exec snap_rowcache.end_snap
Parameter                    Usage   Fixed    Gets  Misses   Scans  Misses    Comp    Mods Flushes
---------                    -----   -----    ----  ------   -----  ------    ----    ---- -------
dc_users                         0       0      74       0       0       0       0       0       0
dc_objects                       0       0       3       0       0       0       0       0       0
dc_global_oids                   0       0       6       0       0       0       0       0       0
dc_object_ids                    0       0     495       0       0       0       0       0       0
dc_histogram_defs                0       0     219       0       0       0       0       0       0
dc_histogram_data                0       0      52       0       0       0       0       0       0




On Sun, Dec 20, 2009 at 6:00 PM, Greg Rahn <greg@xxxxxxxxxxxxxxxxxx> wrote:

> What does the ASH/AWR report show?
>
> If you have initial extents of 10M for each segment and you are not
> allocating numerous additional extents, then it's unlikely to be an extent
> allocation issue, correct?
>
>
> On Sun, Dec 20, 2009 at 5:08 AM, LS Cheng <exriscer@xxxxxxxxx> wrote:
> > Since my results are not consistent I am not really sure if the problem
> is
> > caused by extent or segment space allocation.
>
> --
> Regards,
> Greg Rahn
> http://structureddata.org
>
