That particular detail depends on the architecture of the machine. With
the advent of multi-core processors, like the latest generations of the
Intel Xeon E5 or AMD Ryzen families, there is certainly enough CPU
power to perform I/O requests in parallel. Unix and Linux kernels
generally have light-weight kernel threads that perform that kind of
work. I am not sure about Solaris machines, but Linux machines usually
use three types of I/O interfaces: SATA (for JBOD), FC/AL and iSCSI, with
the last one becoming increasingly popular.
SATA is not really well suited
for async I/O because it tops out at 6 Gb/s and gets saturated
very easily. The latest versions of FC/AL adapters, running at up to 128
Gb/s, are usually used for attaching EMC VMAX3 or high-end
NetApp SAN equipment and can easily sustain parallel I/O. Truth be told,
the most popular versions are 32 Gb/s, not 128 Gb/s, which is on the bleeding
edge. However, the increasingly popular iSCSI over 10Gb NIC cards cannot.
First, iSCSI causes a multitude of interrupts, which execute in
interrupt mode and prevent clock interrupts from occurring, which
means no time sharing. In other words, no parallelism. Second, many
clients use normal, run-of-the-mill 10Gb adapters instead of
adapters with the iSCSI protocol built into the NIC. That makes the situation
much worse because the native Linux iSCSI implementation is not the best one
and is rather inefficient. Furthermore, Ethernet latency grows with
utilization, which makes I/O requests slower once the network
utilization reaches around 75%. That is not that hard to achieve with an
OLTP database. As of Oracle 12.2, it is possible to throttle down your
PDB (I haven't tried it with the non-PDB architecture) using the MAX_IOPS and
MAX_MBPS parameters, which is where Kevin Closson's SLOB tool comes in very handy. I
would advise against using asynchronous I/O on any database that
utilizes iSCSI-attached storage.
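The latency blow-up with utilization can be sketched with a simple M/M/1 queueing model. This is an assumption for illustration only (real Ethernet/iSCSI latency curves are more complicated), but it shows why ~75% utilization already hurts: mean response time is W = S / (1 - rho), so at 75% utilization every I/O takes four times its unloaded service time.

```python
# M/M/1 sketch (illustrative assumption, not a measured iSCSI curve):
# mean response time W = S / (1 - rho), where S is the service time
# and rho is the utilization. W diverges as rho approaches 1.
def response_time(service_time, utilization):
    """Mean M/M/1 response time for a given utilization in [0, 1)."""
    if not 0.0 <= utilization < 1.0:
        raise ValueError("utilization must be in [0, 1)")
    return service_time / (1.0 - utilization)

S = 0.1  # arbitrary service time per I/O, in milliseconds
for rho in (0.25, 0.50, 0.75, 0.90):
    print(f"utilization {rho:.0%}: response time {response_time(S, rho):.3f} ms")
```

At 90% utilization the same I/O takes ten times its unloaded service time, which is why an OLTP workload on a busy shared Ethernet segment feels it so quickly.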
On 2/29/20 1:43 PM, Noveljic Nenad wrote:
--
With regard to ‘db file parallel read’ with sync IO, there seems to be a difference between Solaris x64 and Linux. On Solaris I’m seeing a bunch of serial pread calls (as opposed to preadv on Linux). The sum of their durations seems to be contained in the wait event time. My observation refers to a non-multitenant 12.1 instance running on Solaris 11.4. Therefore, the async IO would bring a huge benefit, provided there’s enough bandwidth to handle parallel IO calls.
Best regards,
Nenad
https://nenadnoveljic.com/blog