How are you working out the connection between the row cache latches and
this statement? ASH, or an ordinary SQL_TRACE?
If you have some detailed information about the wait you can check the p1
and p2 parameters to determine exactly which latches are being acquired
most frequently. Ideally, a raw trace file showing a chain of these
inserts would help to pinpoint whether or not there is a pattern that
works through a series of latches.
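As a sketch of that decoding (assuming ASH is licensed and available): for
"latch: row cache objects" the p1 value of the wait is the latch address,
so you can aggregate by p1/p2 and then look the busiest addresses up
among the child latches:

```sql
-- Which latch addresses show up most often under this wait?
-- (for latch waits, p1 = latch address, p2 = latch number)
select p1, p2, count(*)
from   v$active_session_history
where  event = 'latch: row cache objects'
group  by p1, p2
order  by count(*) desc;

-- Map the addresses to specific child latches and their activity:
select addr, child#, gets, misses, sleeps
from   v$latch_children
where  name = 'row cache objects'
order  by sleeps desc;
```

If one or two child latches dominate, that tells you which dictionary
cache entries the sessions are fighting over.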
Thoughts:
1) If the code does one or two inserts, does something else, then commits,
then starts the next couple of inserts, the session will be re-allocating
the temporary segment on each little batch of inserts. This would be
consistent with the pattern of latching you've reported.
2) Is there a mismatch between the character set of the client and the
character set of the server?
3) Does the client issue a parse call on every execution?
4) Is the client connected as the owner of the GTT, or is there a
public synonym for the table, with the client connected to a schema other
than the table owner?
5) Are you using a temporary tablespace group rather than a simple
temporary tablespace?
6) Do different client connections log in to different schemas, and if so,
are different tablespaces associated with different schemas?
Option 1 looks like the most likely candidate for your issue.
Options 2 to 6 are just ideas that MIGHT introduce extra latching.
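A minimal sketch of how you might check options 1 and 3 (the SID and
serial# below are hypothetical; substitute values from v$session):

```sql
-- Enable extended SQL trace (waits + binds) for the suspect session, then
-- look for the commit/insert pattern, and for PARSE lines, in the raw trace.
begin
  dbms_monitor.session_trace_enable(
    session_id => 123,     -- hypothetical SID
    serial_num => 45678,   -- hypothetical serial#
    waits      => true,
    binds      => true
  );
end;
/

-- For option 3, a parse call on every execution would show up here:
select sql_id, parse_calls, executions
from   v$sql
where  sql_text like 'insert into lookup_table%';
```

If parse_calls is close to executions, the client is re-parsing the insert
on every call.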
Regards
Jonathan Lewis
On Thu, 1 Oct 2020 at 09:04, Thomas Kellerer <thomas.kellerer@xxxxxxxxxx>
wrote:
Hello,
we have a strange performance problem in one of our test environments
running Oracle 12.1 (to be upgraded soon, but not soon enough to ignore the
problem).
A simple INSERT into a global temp table causes excessive "latch: row
cache objects" wait events (contributing > 80% of the DB time in the AWR
report).
insert into lookup_table (lookup_value) values (:1);
The table definition is pretty simple:
CREATE GLOBAL TEMPORARY TABLE LOOKUP_TABLE
( "LOOKUP_VALUE" VARCHAR2(255 CHAR),
PRIMARY KEY ("LOOKUP_VALUE") ENABLE
) ON COMMIT DELETE ROWS;
The temp table is used later in a different statement as a replacement for
a large IN list. But the query _using_ the temp table never suffers from
those wait events.
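For context, the usage pattern is roughly this (the second statement is a
sketch; main_table and some_column are placeholder names):

```sql
-- One insert per lookup value, all within the same transaction:
insert into lookup_table (lookup_value) values (:1);

-- Later, the GTT stands in for a large IN list:
select m.*
from   main_table m
where  m.some_column in (select lookup_value from lookup_table);
```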
The dictionary stats from the AWR report show this:
+----------------------+--------------+----------+-----------+----------+----------+-------------+
| Cache                | Get Requests | Pct Miss | Scan Reqs | Pct Miss | Mod Reqs | Final Usage |
+----------------------+--------------+----------+-----------+----------+----------+-------------+
| dc_awr_control       |           78 |     0.00 |         0 |          |        2 |           1 |
| dc_files             |           64 |     0.00 |         0 |          |        0 |           4 |
| dc_global_oids       |        9,085 |     0.00 |         0 |          |        0 |         439 |
| dc_histogram_data    |      492,981 |     0.05 |         0 |          |        0 |       6,631 |
| dc_histogram_defs    |      740,310 |     0.05 |         0 |          |      353 |      10,124 |
| dc_objects           |      541,488 |     0.06 |         0 |          |      240 |       4,697 |
| dc_profiles          |          769 |     0.00 |         0 |          |        0 |           2 |
| dc_props             |        3,829 |     0.00 |         0 |          |        0 |           6 |
| dc_rollback_segments |       16,087 |     0.00 |         0 |          |        0 |         303 |
| dc_segments          |      248,107 |     0.03 |         0 |          |       28 |       4,664 |
| dc_sequences         |        2,249 |     0.09 |         0 |          |    2,249 |          15 |
| dc_tablespaces       |      548,284 |     0.00 |         0 |          |        0 |          13 |
| dc_users             |      817,470 |     0.00 |     1,780 |     0.00 |        0 |         415 |
| outstanding_alerts   |            8 |     0.00 |         0 |          |        0 |           3 |
| sch_lj_oids          |          164 |     3.66 |         0 |          |        0 |          36 |
+----------------------+--------------+----------+-----------+----------+----------+-------------+
I have found references in MOS suggesting that "latch: row cache objects"
might be related to the DB user having system privileges. The user did
have the SELECT_CATALOG_ROLE role granted, so I revoked it, but to no
avail.
MOS also contains documents that state that this could happen with VPD
enabled, which we don't have. But I wonder if a global temp table uses that
somehow to isolate the content of the table between sessions.
The server has 32 CPUs and the AWR report shows a load of 5 seconds of DB
time per second (so it's not CPU bound), 37 transactions/s, and 360
executes/s.
I have run out of ideas. Any suggestions on what I could investigate
further to find the root cause?
Thanks
Thomas
--
//www.freelists.org/webpage/oracle-l