Re: DB performance after upgrade from 9i to 11gR2

  • From: Juan Carlos Reyes Pacheco <jcdrpllist@xxxxxxxxx>
  • To: baitha@xxxxxxxxxxxxxxxxx
  • Date: Fri, 9 May 2014 19:00:23 -0400

Which version of 11g? There is a known performance bug that is not difficult
to identify: check SYS.AUX_STATS$ and see whether the SREADTIM and MREADTIM
values have been multiplied by 10000. You can find more information on
Metalink.
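
To check, a query along these lines (a minimal sketch against the same
SYSSTATS_MAIN rows the fix below touches) shows the current values:

-- Single-block (SREADTIM) and multi-block (MREADTIM) read times, in milliseconds;
-- values in the tens of thousands suggest they were inflated by the 10000x bug.
SELECT PNAME, PVAL1
  FROM SYS.AUX_STATS$
 WHERE SNAME = 'SYSSTATS_MAIN'
   AND PNAME IN ('SREADTIM', 'MREADTIM');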
If they have been, running this block fixes them:

BEGIN
  -- Scale the inflated read times back down to their real values.
  FOR A IN ( SELECT PNAME, PVAL1
               FROM SYS.AUX_STATS$
              WHERE SNAME = 'SYSSTATS_MAIN'
                AND PNAME IN ('SREADTIM', 'MREADTIM') ) LOOP
    IF A.PVAL1 > 75 THEN
      DBMS_STATS.SET_SYSTEM_STATS(A.PNAME, A.PVAL1 / 10000);
      DBMS_OUTPUT.PUT_LINE(A.PNAME || ' ALTERED TO: ' || A.PVAL1 / 10000);
    END IF;
  END LOOP;
  COMMIT;
END;
/
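
To see which values were altered when you run it, enable server output first
(SET SERVEROUTPUT ON in SQL*Plus).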



2014-05-08 2:39 GMT-04:00 Bheemsen Aitha <baitha@xxxxxxxxxxxxxxxxx>:

>  Hello DBA gurus,
>
> We recently upgraded our database from 9i to 11gR2 on Solaris 11 64-bit,
> Veritas File System (single instance). We tested thoroughly in our QA
> environment before the upgrade, but after the upgrade we are seeing terrible
> performance in production with many of the queries. Even though the query
> plan looks good, with very low cost, the queries just sit there and never
> finish. Some of the queries have plan differences (FTS to index scans on
> small tables) compared to the QA environment. We gathered stats with the
> default auto sample size and default method_opt options. I opened a ticket
> with Oracle on the performance issue, but I doubt I will get a solution from
> it. The QA and prod environments are exactly the same, but the queries run
> faster in QA. Did anyone come across a similar problem in the past? Any
> inputs are greatly appreciated.
>
> Thanks
>
> *BA*
>
