Could you give an example? I don't think I've seen such a thing on any
halfway current Linux kernel.
On Mon, 12 Nov 2018, 17:30 Michael Brown <dba@xxxxxxxxxxxxxxxxx> wrote:
You have to be careful with SETALL in ext3 and ext4, you can have serious
issues based on your kernel version.
--
Michael Brown
dba@xxxxxxxxxxxxxxxxx
http://blog.michael-brown.org
On Nov 11, 2018, at 6:47 PM, Ls Cheng <exriscer@xxxxxxxxx> wrote:
I guess for good? I am wondering because I have no experience with SETALL
in XFS, only used it in ext3 and ext4, where SETALL works very well.
Thanks
On Sun, Nov 11, 2018 at 9:27 PM Radoulov, Dimitre <cichomitiko@xxxxxxxxx>
wrote:
Yes,
and in our environment it really makes a difference.
Regards
Dimitre
On Sun, 11 Nov 2018 at 20:57, Ls Cheng <exriscer@xxxxxxxxx>
wrote:
Hi Radoulov
Just wondering, in your tests did you set FILESYSTEMIO_OPTIONS to
SETALL in xfs?
Thanks
On Fri, Nov 9, 2018 at 3:46 PM Radoulov, Dimitre <cichomitiko@xxxxxxxxx>
wrote:
Hello all,
after a few quick tests on XFS and ASM (calibrate_io and swingbench) I
see that direct and asynchronous I/O definitely make a difference.
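For reference, a calibrate_io run looks roughly like the following (a minimal
sketch; the disk count and latency values are placeholders, adjust them for
your own storage):

  SET SERVEROUTPUT ON
  DECLARE
    l_max_iops PLS_INTEGER;
    l_max_mbps PLS_INTEGER;
    l_latency  PLS_INTEGER;
  BEGIN
    -- placeholder values: 4 disks/LUNs, 20 ms target latency
    DBMS_RESOURCE_MANAGER.CALIBRATE_IO(
      num_physical_disks => 4,
      max_latency        => 20,
      max_iops           => l_max_iops,
      max_mbps           => l_max_mbps,
      actual_latency     => l_latency);
    DBMS_OUTPUT.PUT_LINE('max_iops=' || l_max_iops ||
                         '  max_mbps=' || l_max_mbps ||
                         '  actual_latency=' || l_latency);
  END;
  /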
Stefan and Neil, thank you for your suggestions!
Regards
Dimitre
On 31/10/2018 12:29, Neil Chandler wrote:
Radoulov,
The caching in the SGA understands your data usage patterns through the
LRU algorithms and will have cached all of the best data. The FS cache, if
you dump it out, will look a lot more like white noise with few discernible
patterns. The SAN cache even more so. The more single-block reads you have,
the more like white noise it all looks. The likelihood of there being a
cache hit in the FS or SAN cache is relatively low. The advantage of direct
path reads significantly outweighs the advantage of both of those caches.
It is worth noting that on most SAN caches, if you specify that the LUN
is for a database it will disable read-ahead to pre-populate the cache as
it understands that it is not the best use of the cache (the general rule
is that SAN cache should be reserved exclusively for writes when the SAN is
used for the database.)
Note that these statements are generalisations, and that there may be
cases where your assertion is true but they will be an edge case and I
would recommend that you have a provable scenario to justify running in
that configuration.
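If you want provable numbers, one simple starting point is to compare how
many of your reads already bypass the buffer cache (a minimal sketch;
statistic names as exposed in v$sysstat):

  SELECT name, value
    FROM v$sysstat
   WHERE name IN ('physical reads', 'physical reads cache', 'physical reads direct');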
Neil Chandler
Database Guy.
------------------------------
*From:* oracle-l-bounce@xxxxxxxxxxxxx <oracle-l-bounce@xxxxxxxxxxxxx>
on behalf of Radoulov, Dimitre <cichomitiko@xxxxxxxxx>
*Sent:* 31 October 2018 07:20
*To:* Andrew Kerber
*Cc:* lkaing@xxxxxxxxx; contact@xxxxxxxx; Oracle-L Group
*Subject:* Re: Storage choice for Oracle database on VMware
Thank you all for the valuable input!
what is the problem with direct I/O? You should never run an Oracle database through page cache anyway :)
I'm not sure if direct I/O is always the best choice. I think that
certain workloads may benefit from the FS cache.
Anyway, I'm wondering why setall is still not the default value for
filesystemio_options on Linux (most probably because of the bugs with
certain filesystems and kernel versions).
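For completeness, checking and switching the parameter is roughly this (a
minimal sketch; it is a static parameter, so it only takes effect after an
instance restart):

  SHOW PARAMETER filesystemio_options
  -- enable both direct and asynchronous I/O as of the next startup
  ALTER SYSTEM SET filesystemio_options = SETALL SCOPE = SPFILE;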
Regards
Dimitre
On Tue, 30 Oct 2018 at 22:38, Andrew Kerber <andrew.kerber@xxxxxxxxx>
wrote:
Most places with growing databases and heavy-duty environments on
VMware use ASM. Some use XFS or similar and LVM, though I am not fond of
those.
On Tue, Oct 30, 2018 at 4:34 PM Leng <lkaing@xxxxxxxxx> wrote:
ASM is great when you plan correctly. If you don't, it's very painful.
E.g. if you have different sized disks ASM will be forever rebalancing, and
failing as there is not enough space on the odd disk. So you need to vacate
the diskgroup to rebuild it. (Yes, you know... not my fault, the previous
consultant did it...) If there's an ASM bug you may have to take an outage
on the ASM instance to apply the patch.
Normal disk operations like dd against ASM are almost impossible. Trying to
find that corrupted data block on the ASM disk takes great ASM expertise
from a great Oracle support engineer.
Those were some of my worst ASM nightmares. It was only 2 years ago.
I have since moved on...
Cheers,
Leng
On 31 Oct 2018, at 7:20 am, Stefan Koehler <contact@xxxxxxxx> wrote:
Hello Dimitre,
what is the problem with direct I/O? You should never run an Oracle
database through page cache anyway :)
I would go with tweaked XFS (e.g. "nobarrier" as this information is
usually not passed through correctly with VMDKs on VMFS, etc.) if it is
just one single instance in this VM.
Best Regards
Stefan Koehler
Independent Oracle performance consultant and researcher
Website: http://www.soocs.de
Twitter: @OracleSK
"Radoulov, Dimitre" <cichomitiko@xxxxxxxxx> hat am 30. Oktober 2018
synchronous, right?
Thank you Chris, Matthew and Niall,
so the question is if performancewise ASM is worth it.
With the default Oracle database settings the I/O on XFS would be
enable async I/O without turning on direct I/O too.
And if I understand correctly Note 1987437.1, on Linux you cannot
----
Regards
Dimitre
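A quick way to verify what actually got enabled on the datafiles is to look
at v$iostat_file (a minimal sketch; ASYNCH_IO shows ASYNC_ON or ASYNC_OFF):

  SELECT file_no, filetype_name, asynch_io
    FROM v$iostat_file
   WHERE filetype_name = 'Data File';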
--
Andrew W. Kerber
'If at first you don't succeed, don't take up skydiving.'