Hi Goti,
What query did you use to return the results you shared in the first email?
The reason I ask is that it looks like you are returning current_obj#
as the object, but that column is not reliable when the event is null
(i.e. the session is ON CPU). You should instead be joining to the
corresponding v$sql_plan row and reporting that. It's easy to go down the
wrong rabbit hole based on misleading information.
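For reference, something along these lines is what I'd use - a minimal
sketch, assuming the cursor is still in the shared pool (switch to
dba_hist_active_sess_history / dba_hist_sql_plan if it has aged out):

    -- attribute ON CPU samples to plan lines rather than current_obj#
    SELECT p.id, p.operation, p.options, p.object_name,
           COUNT(*) AS samples
    FROM   v$active_session_history ash
           JOIN v$sql_plan p
             ON  p.sql_id       = ash.sql_id
             AND p.child_number = ash.sql_child_number
             AND p.id           = ash.sql_plan_line_id
    WHERE  ash.sql_id        = 'fyfnh1pnxjg2d'
    AND    ash.session_state = 'ON CPU'
    GROUP  BY p.id, p.operation, p.options, p.object_name
    ORDER  BY samples DESC;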
The join conditions between customer and market_segment_mapping are quite
interesting:
INNER JOIN terr.market_segment_mapping MSM
ON ( ( C.customer_group = MSM.customer_group
AND C.industry_code = MSM.industry_code
AND C.industry_code_5 = MSM.industry_code_5 )
OR ( C.customer_group = MSM.customer_group
AND C.industry_code = MSM.industry_code
AND MSM.industry_code_5 IS NULL )
OR ( C.customer_group = MSM.customer_group
AND MSM.industry_code IS NULL
AND MSM.industry_code_5 IS NULL ) )
AND MSM.eff_date <= :B1
AND MSM.end_date > :B1
(taken from the NOT IN subquery)
The plan says you're joining to market_segment_mapping in the NOT IN subquery
using an index on MARKET_SEGMENT_ID alone: this, on average, finds 60 rows per
loop, which escalates to 4 million rowids in the 5 minute sample. There is
one join condition that is repeated in all of the OR branches -
C.customer_group = MSM.customer_group. If you had an index on
market_segment_mapping that covered both MARKET_SEGMENT_ID and
CUSTOMER_GROUP, you should find this effort is reduced quite a lot (you
might need to factor this condition out of the OR branches for the
optimizer to use it as an access predicate).
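Something like this, in other words (the index name is just a placeholder):

    CREATE INDEX msm_seg_grp_ix
        ON terr.market_segment_mapping (market_segment_id, customer_group);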
In the main subquery you're also spending a lot of CPU time on the hash join
because of this condition. You can see that it is using only COMPANY_ID and
COUNTRY to create the hash buckets; the CUSTOMER_GROUP condition is
evaluated as a filter rather than as an access predicate, meaning you are
comparing more rows than necessary to each other and probably using more
memory to do it. Here, all you need to do is factor out the common
condition. If that is not enough, then you may want to put effort into
expanding the remaining OR'ed conditions into separate branches.
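To be concrete, the factored-out version of the join quoted above would
look like this - same predicates, just with the common condition hoisted
out of the OR:

    INNER JOIN terr.market_segment_mapping MSM
        ON C.customer_group = MSM.customer_group
       AND ( ( C.industry_code = MSM.industry_code
               AND C.industry_code_5 = MSM.industry_code_5 )
          OR ( C.industry_code = MSM.industry_code
               AND MSM.industry_code_5 IS NULL )
          OR ( MSM.industry_code IS NULL
               AND MSM.industry_code_5 IS NULL ) )
       AND MSM.eff_date <= :B1
       AND MSM.end_date > :B1

If the optimizer still won't use CUSTOMER_GROUP as an access predicate,
rewriting the three OR branches as a UNION ALL of three separate joins is
the manual version of the expansion I mean.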
There is a worse index access on line 46 of the plan, though: each row
driving this loop causes 8 index range scans on average (due to the
number of partitions it needs to read from), and the filter then leaves
about 1000 rows per partition - in the 5 minute sample you obtain 447
million rowids that all need to be accessed in TERR_POSTAL. Eventually this
gets filtered down to 60898 rows (the number of rows from the driving row
source). Judging by the table name, the postal_code join filter probably
provides extremely good selectivity, so an index on that column should give
you much better performance. If you need to include additional columns in
the index to provide this selectivity (maybe terr_id and country?) then do
so. Also consider whether it's worth making it a global index: if
postal_code is supposed to be unique in terr_postal, then it would be
sensible to make it a global unique index.
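In other words, one of these (index names again just placeholders):

    -- local composite index matching the join and the partitioning:
    CREATE INDEX terr_postal_ix
        ON terr_postal (terr_id, postal_code) LOCAL;

    -- or, if postal_code really is unique across the whole table, a
    -- nonpartitioned (i.e. global) unique index:
    CREATE UNIQUE INDEX terr_postal_uix
        ON terr_postal (postal_code);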
tl;dr
- Change both joins between customer and market_segment_mapping so that
the customer_group condition sits outside the OR'ed parentheses.
- Create an index on market_segment_mapping (market_segment_id,
customer_group).
- Create an index on terr_postal (terr_id, postal_code).
Hope this helps,
Andrew
On Wed, 25 Aug 2021 at 14:39, Lothar Flatz <l.flatz@xxxxxxxxxx> wrote:
Ok, the short answer is that there is a condition applied in step 26 on
Terr_Postal which is estimated as too selective.
It is possible that this is the source of your trouble. Find out why the
estimate is wrong and correct the stats if possible.
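If the stats turn out to be stale, re-gathering them would be the first
thing to try (owner name assumed here, adjust to your schema):

    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname    => 'TERR',          -- assumed owner
        tabname    => 'TERR_POSTAL',
        method_opt => 'FOR ALL COLUMNS SIZE AUTO',
        cascade    => TRUE);           -- include the indexes
    END;
    /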
Sorry that I am a bit short, but I am under some time pressure.
Regards
Lothar
Am 25.08.2021 um 13:12 schrieb Goti:
Thanks Laurentiu for responding. Please find below a link to the SQL
monitor report. It was taken 5 minutes after the SQL was started. The
SQL ran for more than 5 hours and hence we had to cancel it. I don't see
the SQL in the RTSM history either. I am not sure how far this will be useful.
https://gist.githubusercontent.com/aryangoti/eae0776d88d46361c2c60d0d6b083680/raw/ec1ac6a6359ca3f99c6f7216850dfa09103b6d78/gistfile1.txt
Thanks,
Goti
On Wed, Aug 25, 2021 at 4:07 PM Laurentiu Oprea <laurentiu.oprea06@xxxxxxxxx> wrote:
Hello,
Can you attach a sql monitor report?
On Wed, 25 Aug 2021 at 12:58, Goti <aryan.goti@xxxxxxxxx> wrote:
Hi All,
We have a SQL performance issue, and as per ASH the SQL is spending most
of its time on CPU. I'm not sure which operation is taking the time. Can
someone guide me on how to troubleshoot what is causing the SQL to spend
so much time on CPU?
SQL_ID         EVENT    SESSION   MODULE             PCT
-------------  -------  --------  -----------------  -----
fyfnh1pnxjg2d  ON CPU             PL/SQL Developer   91.9%  <<<<<<<<<<<<

OBJ                        OTYPE            OID     ONAME                      SQL_ID         CLASS       PCT
-------------------------  ---------------  ------  -------------------------  -------------  ----------  -----
XIE1CUST_TERR              INDEX PARTITION  106842  XIE1CUST_TERR              fyfnh1pnxjg2d  data block     0%
XFK1PARTITION_TERR_POSTAL  INDEX PARTITION  107430  XFK1PARTITION_TERR_POSTAL  fyfnh1pnxjg2d  data block    .2%
XFK1PARTITION_TERR_POSTAL  INDEX PARTITION  107435  XFK1PARTITION_TERR_POSTAL  fyfnh1pnxjg2d              95.5%
XIE1CUST_TERR              INDEX PARTITION  106843  XIE1CUST_TERR              fyfnh1pnxjg2d  data block   1.6%
XIE1CUST_TERR              INDEX PARTITION  106841  XIE1CUST_TERR              fyfnh1pnxjg2d  data block    .1%
XIE4CUST_TERR              INDEX PARTITION  106911  XIE4CUST_TERR              fyfnh1pnxjg2d  free list      0%
                                                                               fyfnh1pnxjg2d                 0%
XIE1CUST_TERR              INDEX PARTITION  106839  XIE1CUST_TERR              fyfnh1pnxjg2d  data block   2.3%
XIE1CUST_TERR              INDEX PARTITION  106838  XIE1CUST_TERR              fyfnh1pnxjg2d  data block    .3%
Execution Plan Details:
https://gist.githubusercontent.com/aryangoti/7f0bc85cbe6df372e488deecbfdf30ef/raw/40598552b0a5c47c6857c86b06eabbca2d5daf8a/gistfile1.txt
Thanks,
Goti