Re: Large index creation speed up / issues with direct path read temp / direct path write temp

  • From: Lothar Flatz <l.flatz@xxxxxxxxxx>
  • To: oracle-l@xxxxxxxxxxxxx
  • Date: Sun, 5 Jul 2015 08:54:33 +0200

Hi,


/"Increassing parallelism is the option but we will not survive arch log stream with parallel 8 . (long story short)"/

It almost feels silly to ask:

Are you not using "create index .... nologging"?
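For illustration, a minimal sketch of what I mean (the table, column and index names are made up, and the degree is just the parallel 8 from your mail):

  create index big_tab_ix on big_tab (col1, col2)
    nologging
    parallel 8
    compress 1;

  -- optionally switch the attributes back once the build is done
  alter index big_tab_ix logging noparallel;

Bear in mind that NOLOGGING is ignored if the database or tablespace is in FORCE LOGGING mode.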

Am I missing something here?

Regards

Lothar

On 04.07.2015 12:05, GG wrote:

Thanks for all the valuable comments.
The filesystem is UFS; I'm pretty sure v$sql_workarea_active displayed 1 pass for all the slaves.
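(For reference, roughly the query I was watching, with column names as in v$sql_workarea_active; filter on your own SIDs as needed:)

  select sid, operation_type, actual_mem_used, number_passes, tempseg_size
  from   v$sql_workarea_active
  order  by sid;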
From my point of view the create index process goes as follows:
direct path write temp from the slaves into the temporary tablespace,
then direct path read temp from the temporary tablespace by the query coordinator,
and the data is written into a temp segment. Is it possible that only the query coordinator
writes into the temp segment in the permanent tablespace and switches that temp segment
into a permanent one at the end?
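To check that, I suppose something like this could show which sessions hold which kind of temp segment during the build (just a sketch, tying the PX slaves and the QC together via the session address):

  select p.qcsid, p.sid, u.segtype, u.tablespace, u.blocks
  from   v$tempseg_usage u, v$px_session p
  where  u.session_addr = p.saddr
  order  by p.qcsid, p.sid;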

What I've noticed is that p3 (or p2, I'm not sure, but it is the parameter that reports the number of blocks processed by a single direct path temp operation) was something like 1, and sometimes 7.
We experimented and bumped it to 128, but there seemed to be no performance improvement.
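A quick way to confirm which parameter carries the block count (I believe it is p3 for the direct path temp waits, but this shows it for sure):

  select name, parameter1, parameter2, parameter3
  from   v$event_name
  where  name in ('direct path read temp', 'direct path write temp');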

Today I've found that the following should be set:
_smm_auto_max_io_size=1024 may help (according to Oracle Sun Database Machine Setup/Configuration Best Practices (Doc ID 1274318.1)), though I'm not on Exadata :).
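If I do try it, something along these lines (it is a hidden parameter, so test outside production first and ideally only on Oracle Support's advice; whether it can be changed dynamically may depend on the version, hence scope=spfile here):

  alter system set "_smm_auto_max_io_size" = 1024 scope = spfile;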

Increasing parallelism is an option, but we will not survive the archive log stream with parallel 8. (long story short)
I'll try with event 10033 on Monday and follow up, hopefully.
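For the record, the event syntax I intend to use (the trace level is a guess on my side, not something taken from a note):

  alter session set events '10033 trace name context forever, level 4';
  -- run the create index in this session, then:
  alter session set events '10033 trace name context off';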
G



On 2015-07-04 at 11:36, Stefan Koehler wrote:
Hi,

@Jonathan:
You are right of course. Thank you very much for the correction. I should check the Safari auto-correction more carefully; it should have been "it is pretty
avoidable I guess". However, the "real" data might be bigger than the 70 GB as GG is using index key compression. AFAIK this is only applied at the index leaf
block level and is not already done during the sort in some way. Am I right, or are there enhancements here as well?

@Lothar:
IMO increasing the parallelism can make the service vs. application wait time discrepancy even worse. 30-40 ms vs. 753 ms pretty obviously points to some disk
(and possibly HBA) queuing effect or a ZFS issue. SAN storage encourages publishing only a few large LUNs, but that neglects disk queuing :-)
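One way to see whether those long waits are rare outliers or the norm would be the wait histograms, e.g. (just a sketch):

  select event, wait_time_milli, wait_count
  from   v$event_histogram
  where  event in ('direct path read temp', 'direct path write temp')
  order  by event, wait_time_milli;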
Best Regards
Stefan Koehler

Freelance Oracle performance consultant and researcher
Homepage: http://www.soocs.de
Twitter: @OracleSK

Jonathan Lewis <jonathan@xxxxxxxxxxxxxxxxxx> wrote on 4 July 2015 at 11:00:

Stefan,

2GB of memory to do a 70GB index should be an easy one-pass sort; the algorithm is "n-squared", viz: we might have to produce 140 streams of 1GB (allowing
for various overheads out of the 2GB), but then a merge of 140 streams at (say) 1MB per reload per stream needs only 140MB of memory. With 2GB you might
be able to sort something in the order of 1TB in a one-pass.
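As a back-of-envelope version of the same arithmetic (treating the 1GB run size as the assumption that absorbs the overheads):

  \[
  \text{one-pass capacity} \;\approx\; \text{run size} \times \frac{\text{sort memory}}{\text{merge read size}}
  \;\approx\; 1\,\text{GB} \times \frac{2\,\text{GB}}{1\,\text{MB}} \;\approx\; 2\,\text{TB}
  \]

Halving the run size again for overheads still leaves you in the order of 1TB.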

--
//www.freelists.org/webpage/oracle-l



