OK, I also end up in this common position of "If I could redesign this and have a dream system where I made all the decisions...", etc. I have a lot of code that requires heavy reads/writes to temp, so I offer up temp tablespace groups to assist, along with whatever others might offer.

The trick is to use parallel (yes, on non-partitioned tables, bizarre concept, I know...), hinted in your SQL and matched to the number of temp tablespaces in your temp tablespace group. This lets you "stripe" the work across multiple temp tablespaces and speed up the temp reads and writes, eliminating a lot of these waits. If you are using straight DML statements, remember that you need to enable parallel DML rather than simply using the parallel hint on the selects.

Although I've always been one to feel "tune the query, fight the swap to temp!", due to the size of the data we work with here I've had to adapt: when left with no choice but to use temp heavily, I find ways to do so more effectively.

Let's say you choose to create four temp tablespaces and incorporate them into a temp tablespace group called AGG_TEMP. You can then assign that group to the user that will be responsible for executing the scripts within the database:

ALTER USER <AGG_USER> TEMPORARY TABLESPACE AGG_TEMP;

Once you've done that, simply make the following changes to your SQL:

insert into <table_name>
select /*+ PARALLEL (tbl_alias 4) */ col1, agg(col2), ...

and so on and so forth. If you have tuned your queries like you have claimed, it will evenly distribute the work across the four temporary tablespaces in the tablespace group... :) It's worth a shot...

Kellyn Pedersen
Sr. Database Administrator
I-Behavior Inc.
http://www.linkedin.com/in/kellynpedersen
www.dbakevlar.blogspot.com

--- On Wed, 9/29/10, Mark Strickland <strickland.mark@xxxxxxxxx> wrote:

From: Mark Strickland <strickland.mark@xxxxxxxxx>
Subject: Fastest Way to Unload Data - Straight SQL Query vs. PL/SQL and dbms_output vs. PL/SQL and utl_file
To: oracle-l@xxxxxxxxxxxxx
Date: Wednesday, September 29, 2010, 10:49 AM

Oracle 10gR2 on Windows 2008R2 with Cygwin and bash scripts, 4 CPUs and 4 GB RAM.

I've written a horrible set of ETL bash scripts for a customer project, and the final script does the "transforms". It's a set of 14 queries that each join 10-12 tables and write the output in pipe-delimited format. Lines are up to 1000 bytes wide. I've set DOP on the tables to 16 in order to get more PGA for hash joins. The queries themselves seem to run reasonably fast (I've tuned them as well as I can, I think), but the output is the problem. I have to concatenate '|' between the fields because the next step in the process requires that.

Straight SQL output can write about 775 KB per second to disk. I got a PL/SQL procedure from Laurent Schneider's blog that uses dbms_output and, once it starts writing, it writes about 3.5 MB per second, a vast improvement. However, between the time when the query heavy lifting "appears" to have completed (buffer gets stop increasing, TEMP tablespace writes drop to nothing, USERS tablespace reads drop to nothing) and when the output starts writing, several hours pass. I'm assuming that it is buffering all the output (about 8 GB) before it starts writing. I see a lot of writes to the Windows pagefile, but little or no writes to the TEMP tablespace. There certainly isn't enough memory to buffer 8 GB. I will also try the utl_file approach.

So, I'm looking for at least anecdotal evidence for the best approach here. I'm under a very tight deadline and am doing as much of my own testing as I can get done, but I'm hoping that others on the list will know right off how to approach this the best way. Thanks in advance.

Mark Strickland
Seattle, WA
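[Editor's note: to make the temp tablespace group setup described above concrete, here is a minimal sketch. The tempfile paths, sizes, tablespace names, user name AGG_USER, and the source/target table names are illustrative assumptions; adjust them for your storage layout and schema.]

```sql
-- Four temp tablespaces, each a member of the group AGG_TEMP.
-- Paths and sizes are hypothetical.
CREATE TEMPORARY TABLESPACE agg_temp1 TEMPFILE '/u01/oradata/agg_temp1.dbf' SIZE 8G TABLESPACE GROUP agg_temp;
CREATE TEMPORARY TABLESPACE agg_temp2 TEMPFILE '/u02/oradata/agg_temp2.dbf' SIZE 8G TABLESPACE GROUP agg_temp;
CREATE TEMPORARY TABLESPACE agg_temp3 TEMPFILE '/u03/oradata/agg_temp3.dbf' SIZE 8G TABLESPACE GROUP agg_temp;
CREATE TEMPORARY TABLESPACE agg_temp4 TEMPFILE '/u04/oradata/agg_temp4.dbf' SIZE 8G TABLESPACE GROUP agg_temp;

-- Point the ETL user at the group rather than a single temp tablespace.
ALTER USER agg_user TEMPORARY TABLESPACE agg_temp;

-- In the loading session: parallel DML must be enabled explicitly,
-- per-session; hinting the SELECT alone is not enough for the INSERT.
ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ PARALLEL(t 4) */ INTO target_table t
SELECT /*+ PARALLEL(s 4) */ col1, SUM(col2)
FROM   source_table s
GROUP BY col1;
```

The degree of 4 matches the number of tablespaces in the group, which is the "striping" effect described above.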
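[Editor's note: on the utl_file approach mentioned above, here is a minimal sketch of a pipe-delimited unload. Unlike dbms_output, which appears to buffer the whole result before printing, utl_file writes each line as it is fetched. The directory object EXPORT_DIR, the file name, and the query are hypothetical; a DBA must create the directory object and grant access to it first.]

```sql
-- Assumes something like: CREATE DIRECTORY export_dir AS 'C:\exports';
-- and: GRANT READ, WRITE ON DIRECTORY export_dir TO agg_user;
DECLARE
  l_file UTL_FILE.FILE_TYPE;
BEGIN
  -- 32767 is the maximum line size utl_file supports;
  -- plenty for lines up to 1000 bytes wide.
  l_file := UTL_FILE.FOPEN('EXPORT_DIR', 'extract.txt', 'w', 32767);

  FOR r IN (SELECT col1, col2, col3 FROM some_table) LOOP
    -- Build the pipe-delimited line and write it immediately,
    -- instead of accumulating the entire 8 GB result in memory.
    UTL_FILE.PUT_LINE(l_file, r.col1 || '|' || r.col2 || '|' || r.col3);
  END LOOP;

  UTL_FILE.FCLOSE(l_file);
EXCEPTION
  WHEN OTHERS THEN
    -- Make sure the handle is released if the unload fails midway.
    IF UTL_FILE.IS_OPEN(l_file) THEN
      UTL_FILE.FCLOSE(l_file);
    END IF;
    RAISE;
END;
/
```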