Re: Large index creation speed up / issues with direct path read temp / direct path write temp

  • From: Stefan Koehler <contact@xxxxxxxx>
  • To: oracle-l@xxxxxxxxxxxxx, grzegorzof@xxxxxxxxxx
  • Date: Sat, 4 Jul 2015 10:09:50 +0200 (CEST)

Hi GG,
multipass sorts should be avoided in general, but with your settings (70 GB of
data vs. a 2 GB sort area) they are pretty much unavoidable, I guess.
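For what it's worth, you can confirm the multipass sorts from another session
while the CREATE INDEX is running - a minimal sketch, just looking at the
active work areas and the cumulative work area statistics:

-- Work areas of the running statement: number of passes, memory and temp usage
select sid, operation_type, number_passes,
       round(actual_mem_used/1024/1024) mem_mb,
       round(tempseg_size/1024/1024)    temp_mb
from   v$sql_workarea_active;

-- Cumulative counts since instance start: optimal vs. one-pass vs. multipass
select name, value
from   v$sysstat
where  name like 'workarea executions%';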


> waits, as AWR reported direct path read temp as top wait (second was direct
> path write temp) with 753 ms per I/O. When trying to confirm that on the
> storage side we concluded that storage reported only 30-40 ms waits during
> that period.

This is the most common issue when Oracle DBAs are talking to storage guys:
service time vs. host/application wait time. The following blog post
demonstrates this on Linux, but it also applies to Solaris in a slightly
different way: https://bartsjerps.wordpress.com/2011/03/04/io-bottleneck-linux/
Unfortunately you have not mentioned which filesystem (ZFS?) or storage
solution (ASM?) you are using.
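Before going down to the OS you can also compare the latency distribution
Oracle itself recorded for these waits with the service times the storage team
reports - a minimal sketch via v$event_histogram:

-- Latency distribution as seen by the Oracle sessions (this includes any
-- queueing above the storage layer, so it can be far higher than the pure
-- storage service time)
select event, wait_time_milli, wait_count
from   v$event_histogram
where  event in ('direct path read temp', 'direct path write temp')
order  by event, wait_time_milli;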


> I've checked sysmetric during the slow index creation and it reported about
> 40 MB/s reads and 30 MB/s writes, whereas the storage can easily sustain
> 400 MB/s and the whole server was about 96% idle.

Yes, I guess the 400 MB/s is possible with a constant data stream, right?
However, this could also be an issue with the ZFS settings (copy on write),
but we just don't know enough about your environment here.


> What I'm looking for is a way to deep dive / troubleshoot the issue. Any
> ideas? Short stack profiling during the issue, some events to turn on?

You can use events 10032 and 10033 to trace the sort activity, but I guess you
want to troubleshoot the I/O performance, right? As you are running on
Solaris - go with DTrace. You can trace anything with DTrace, from Oracle down
to ZFS. Brendan Gregg is always a great starting point for DTrace:
http://dtrace.org/blogs/brendan/2012/12/29/zfsday-zfs-performance-analysis-and-tools/
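For the sort tracing, something along these lines in the session that runs the
CREATE INDEX should do (a minimal sketch; the trace files end up in the usual
diag trace directory):

-- 10032 traces sort statistics, 10033 traces the sort I/O to/from temp
alter session set events '10032 trace name context forever, level 10';
alter session set events '10033 trace name context forever, level 10';

-- ... run the CREATE INDEX here ...

alter session set events '10032 trace name context off';
alter session set events '10033 trace name context off';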

Best Regards
Stefan Koehler

Freelance Oracle performance consultant and researcher
Homepage: http://www.soocs.de
Twitter: @OracleSK

GG <grzegorzof@xxxxxxxxxx> wrote on 4 July 2015 at 09:16:


Hi,

when dealing with a relatively large (70 GB) index creation (composite with
3 columns, the first of low cardinality with only 4 values across 2 billion
rows, and compress 2) with parallel 4, which took about 4 hours to complete on
moderate hardware (Solaris M4000, 48 CPUs + 3PAR storage), I observed a
strange issue with
direct path read temp
direct path write temp


waits, as AWR reported direct path read temp as the top wait (second was
direct path write temp) with 753 ms per I/O.
When trying to confirm that on the storage side, we concluded that the
storage reported only 30-40 ms waits during that period.
I've checked sysmetric during the slow index creation and it reported
about 40 MB/s reads and 30 MB/s writes,
whereas the storage can easily sustain 400 MB/s and the whole server was
about 96% idle.

So for me it looks like the above waits may include some additional work
in the instrumentation that is not I/O related.

All on Oracle EE 11.2.0.3 on Solaris, latest PSU.
We used manual workareas, with the tweaks below that were supposed to speed the process up:

alter session set workarea_size_policy=MANUAL;
alter session set db_file_multiblock_read_count=512;
alter session set sort_area_size=2G;

What I'm looking for is a way to deep dive / troubleshoot the issue.
Any ideas? Short stack profiling during the issue, some events to turn
on?

From v$sql_workarea_active I was able to see that tempseg_size was
growing really slowly and a v2 sort was used (maybe v1 is a better choice?).

Regards
GG




--
//www.freelists.org/webpage/oracle-l




