Re: Temp Space performance

  • From: Juan Carlos Reyes Pacheco <jcdrpllist@xxxxxxxxx>
  • To: mvshelton@xxxxxxxxxxxxx
  • Date: Thu, 28 May 2015 17:44:10 -0400

Hello Matt, when you increase the PGA, the Oracle optimizer can choose a
better join path.
There are other parameters you can include in your analysis in a test case:

alter system set optimizer_dynamic_sampling=4 scope=both;  -- more dynamic sampling at parse time
alter system set statistics_level=all scope=both;          -- collect rowsource execution statistics (adds overhead)
alter system set session_cached_cursors=2000 scope=spfile; -- cache more cursors per session (needs restart)
alter system set temp_undo_enabled=true scope=both;        -- 12c: store undo for temp objects in temp, reducing redo
alter system set memory_target=2000m scope=spfile;         -- automatic memory management target (needs restart)
alter system set pga_aggregate_target=800m scope=spfile;   -- more PGA for sorts/hash joins before spilling to temp

and if you have the license you can also compress tables, and in 12c
indexes as well (I haven't tested that yet); before 12c, compressing
indexes is not a good idea.
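For reference, the compression options mentioned might be set up along
these lines (a sketch only: TXN and TXN_IDX are hypothetical names, HCC
requires Exadata or compatible storage, and COMPRESS ADVANCED LOW is the
12c feature under the Advanced Compression license):

```sql
-- HCC table compression (Exadata or compatible storage required)
ALTER TABLE txn MOVE COMPRESS FOR QUERY HIGH;

-- 12c advanced index compression (Advanced Compression option)
ALTER INDEX txn_idx REBUILD COMPRESS ADVANCED LOW;

-- Pre-12c prefix (key) compression on the first key column only;
-- it only pays off with repetitive leading key values, which is
-- why it is often not a good idea
ALTER INDEX txn_idx REBUILD COMPRESS 1;
```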




2015-05-26 20:21 GMT-04:00 Matt <mvshelton@xxxxxxxxxxxxx>:

Thank you Lothar and Randolf for the responses....

I fully agree with minimizing temp tablespace usage first. I have increased
the PGA, gone to manual workarea sizing, and have even played with the
underscore parameters for parallel query while using automatic
pga_aggregate_target. I was just trying to get every ounce of performance
out of the temp tablespaces....

I am running into the HASH JOIN BUFFERED operations that you mentioned in
your article and in your presentation.

Today I tested the partition wise join methodology and have seen pga usage
go way down.

Thanks, Matt
-----Original Message-----
From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx]
On Behalf Of Randolf Geist
Sent: Wednesday, May 20, 2015 2:56 PM
To: ORACLE-L
Subject: Re: Temp Space performance

Hi Matt,

+1 to what Lothar mentioned - rather than trying to tune temp tablespaces
I would focus more on trying to avoid / minimize temp space usage.

In the case of parallel operations there are two common scenarios that
cause increased temp space usage: the required buffering (mostly HASH JOIN
BUFFERED, but it could also be additional BUFFER SORTs), and inappropriate
distribution methods, like BROADCASTing a row source of significantly
underestimated size, or HASH distribution of a row source of
significantly overestimated size, leading to additional buffering
requirements (see e.g.
http://oracle-randolf.blogspot.com/2012/12/hash-join-buffered.html).
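One way to check which operations are actually spilling to temp is to
query the workarea views while the statement is running (a sketch only;
column availability varies slightly by version, and the 8192 block size
is an assumption):

```sql
-- Active work areas currently spilling to temp
SELECT sid, operation_type, policy,
       ROUND(actual_mem_used / 1024 / 1024) AS mem_mb,
       ROUND(tempseg_size / 1024 / 1024)    AS temp_mb
FROM   v$sql_workarea_active
WHERE  tempseg_size IS NOT NULL
ORDER  BY tempseg_size DESC;

-- Per-session temp segment usage (assumes 8 KB blocks)
SELECT session_num, tablespace, segtype,
       blocks * 8192 / 1024 / 1024 AS temp_mb
FROM   v$tempseg_usage
ORDER  BY blocks DESC;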

Therefore I would check what causes the temp space usage. If it's the
buffering (most likely), the most efficient way to address this is making
use of full or at least partial partition-wise joins. This of course
means the data needs to be partitioned in a suitable way, at least some
of the tables, so you need the Partitioning option, and you need the
corresponding physical design, which isn't something that can usually be
done as a "tuning exercise".
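As a sketch of what such a physical design looks like (ORDERS and
ORDER_ITEMS are hypothetical tables): a full partition-wise join requires
both sides to be equi-partitioned on the join key, for example:

```sql
-- Both tables hash-partitioned identically on the join key, so each
-- PX slave pair can join matching partitions directly, without the
-- HASH JOIN BUFFERED / redistribution step
CREATE TABLE orders (
  order_id NUMBER NOT NULL,
  order_dt DATE
)
PARTITION BY HASH (order_id) PARTITIONS 16;

CREATE TABLE order_items (
  order_id NUMBER NOT NULL,
  item_no  NUMBER,
  amount   NUMBER
)
PARTITION BY HASH (order_id) PARTITIONS 16;

-- PQ_DISTRIBUTE(... NONE NONE) requests a full partition-wise join
SELECT /*+ PARALLEL(8) USE_HASH(oi) PQ_DISTRIBUTE(oi NONE NONE) */
       o.order_id, SUM(oi.amount)
FROM   orders o JOIN order_items oi ON oi.order_id = o.order_id
GROUP  BY o.order_id;
```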

You can also try to increase the available PGA and/or use BROADCAST
instead of HASH distribution, but for large joins this is of rather
limited help.
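The distribution method can be influenced with a hint along these lines
(a sketch only; T_SMALL and T_BIG are hypothetical tables, and the hint
names the inner table of the join):

```sql
-- BROADCAST NONE distributes the (smaller) outer row source to every
-- PX slave instead of HASH-distributing both sides; this avoids the
-- buffering of the probe side, at the cost of duplicating the
-- broadcast side once per slave
SELECT /*+ PARALLEL(8) USE_HASH(b) PQ_DISTRIBUTE(b BROADCAST NONE) */
       s.key_col, b.payload
FROM   t_small s JOIN t_big b ON b.key_col = s.key_col;
```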

I've talked about the impact of buffering and the dramatic differences
the distribution / partitioning method can make, especially when it
comes to Exadata / In-Memory, in a webinar I did for
AllThingsOracle/RedGate some time ago:

http://allthingsoracle.com/oracle-exadata-in-memory-real-world-performance/

Randolf

We are running queries with parallel 8, outer joining 10 tables. Quite a
few of the tables are a billion rows and are using HCC QUERY HIGH
compression. I have tuned the queries and we are good there. The slowness
we are seeing is in the reading and writing to temp.

--
//www.freelists.org/webpage/oracle-l






