On 10/31/19 2:36 PM, Chris Taylor wrote:
That literally cannot make it better if the plans are identical. :/

Speaking of collecting statistics and dynamic sampling: those are two different methods of achieving the same goal, namely giving the optimizer enough data to optimize your SQL. Gathering stats using DBMS_STATS is horrendously expensive and time-consuming. DBMS_STATS.AUTO_SAMPLE_SIZE means that DBMS_STATS will use the new approximate-NDV algorithm for the number of distinct values, and that it will read the entire table. Now, I work for a company which occasionally encounters tables larger than 1 TB. Fancy reading the entire table to collect stats! Of course, such a table is usually partitioned by a date column, and as soon as the optimizer figures out that the maximum date in the table is larger than the maximum date in the histogram, it stops using the histogram, so you have to redo your stats all over again. You can deal with this calamity in several ways:
It's more likely the results are cached, or other executions experienced some type of wait event that slowed them down.
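A rough sanity check along these lines can hint at that (the SQL_ID substitution variable is a placeholder; the figures are cumulative averages per child cursor, so this is only an indication):

    -- Low disk_reads per execution with high buffer_gets suggests later runs
    -- were served mostly from the buffer cache.
    SELECT sql_id, child_number, executions,
           ROUND(disk_reads   / NULLIF(executions, 0))          AS disk_reads_per_exec,
           ROUND(buffer_gets  / NULLIF(executions, 0))          AS buffer_gets_per_exec,
           ROUND(elapsed_time / NULLIF(executions, 0) / 1e6, 2) AS avg_elapsed_sec
    FROM   v$sql
    WHERE  sql_id = '&sql_id';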
You can look at DBA_HIST_ACTIVE_SESS_HISTORY for the SQL_IDs and session_id/session_serial#s and look at EVENT, WAIT_CLASS, WAIT_TIME, SESSION_STATE, BLOCKING_SESSION_STATUS, BLOCKING_SESSION, BLOCKING_SESSION_SERIAL#.
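For example, something roughly like this (the SQL_ID and the time window are placeholders):

    SELECT sample_time, session_id, session_serial#, session_state,
           event, wait_class, wait_time,
           blocking_session_status, blocking_session, blocking_session_serial#
    FROM   dba_hist_active_sess_history
    WHERE  sql_id = '&sql_id'
    AND    sample_time >= TIMESTAMP '2019-10-31 00:00:00'
    ORDER  BY sample_time;

Grouping the same rows by EVENT and WAIT_CLASS quickly shows which waits dominated the slow executions.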
It could ALSO be DYNAMIC SAMPLING slowing it down - do you have stats on the tables involved in the query and how old are they (are they stale)? What is your optimizer_dynamic_sampling parameter set to?
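Both are easy to check; for example (the owner and table names below are placeholders for the tables in the query):

    -- How old are the stats, and does Oracle consider them stale?
    SELECT owner, table_name, num_rows, last_analyzed, stale_stats
    FROM   dba_tab_statistics
    WHERE  owner = 'APP_OWNER'
    AND    table_name IN ('ORDERS', 'ORDER_ITEMS');

    -- Current dynamic sampling level (SQL*Plus)
    SHOW PARAMETER optimizer_dynamic_sampling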